Nov 24 12:27:46 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 12:27:46 crc restorecon[4691]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 12:27:46 crc restorecon[4691]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc 
restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc 
restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 
12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc 
restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 12:27:46 crc 
restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 
crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:46 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc 
restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as
customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as
customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 12:27:47 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 
crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc 
restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc 
restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 12:27:47 crc restorecon[4691]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 12:27:47 crc restorecon[4691]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 24 12:27:48 crc kubenswrapper[4756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 12:27:48 crc kubenswrapper[4756]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 24 12:27:48 crc kubenswrapper[4756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 12:27:48 crc kubenswrapper[4756]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 12:27:48 crc kubenswrapper[4756]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 24 12:27:48 crc kubenswrapper[4756]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.195523 4756 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.198973 4756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199002 4756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199008 4756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199012 4756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199023 4756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199028 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199031 4756 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199035 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199039 4756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199043 4756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199048 4756 feature_gate.go:330] unrecognized feature gate: Example Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199052 4756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199056 4756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199060 4756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199063 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199066 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199070 4756 
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199074 4756 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199077 4756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199081 4756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199084 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199087 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199091 4756 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199094 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199098 4756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199101 4756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199104 4756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199108 4756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199111 4756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199115 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199118 4756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 12:27:48 crc 
kubenswrapper[4756]: W1124 12:27:48.199121 4756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199125 4756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199128 4756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199132 4756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199135 4756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199139 4756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199142 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199146 4756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199150 4756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199153 4756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199173 4756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199177 4756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199181 4756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199187 4756 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199190 4756 feature_gate.go:330] 
unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199193 4756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199197 4756 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199202 4756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199207 4756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199211 4756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199216 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199220 4756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199224 4756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199227 4756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199231 4756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199234 4756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199238 4756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199242 4756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199246 4756 
feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199250 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199253 4756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199257 4756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199261 4756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199264 4756 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199268 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199272 4756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199276 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199280 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199284 4756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.199289 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199382 4756 flags.go:64] FLAG: --address="0.0.0.0" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199392 4756 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199399 4756 flags.go:64] FLAG: --anonymous-auth="true" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199405 4756 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199411 4756 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199415 4756 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199420 4756 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199425 4756 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199430 4756 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199434 4756 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199438 4756 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199443 4756 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199447 4756 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199451 4756 flags.go:64] FLAG: --cgroup-root="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199455 4756 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 24 12:27:48 
crc kubenswrapper[4756]: I1124 12:27:48.199459 4756 flags.go:64] FLAG: --client-ca-file="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199463 4756 flags.go:64] FLAG: --cloud-config="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199468 4756 flags.go:64] FLAG: --cloud-provider="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199471 4756 flags.go:64] FLAG: --cluster-dns="[]" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199477 4756 flags.go:64] FLAG: --cluster-domain="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199481 4756 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199485 4756 flags.go:64] FLAG: --config-dir="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199490 4756 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199494 4756 flags.go:64] FLAG: --container-log-max-files="5" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199499 4756 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199503 4756 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199507 4756 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199512 4756 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199516 4756 flags.go:64] FLAG: --contention-profiling="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199520 4756 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199524 4756 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199528 4756 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 24 12:27:48 crc 
kubenswrapper[4756]: I1124 12:27:48.199532 4756 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199537 4756 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199542 4756 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199546 4756 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199551 4756 flags.go:64] FLAG: --enable-load-reader="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199556 4756 flags.go:64] FLAG: --enable-server="true" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199561 4756 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199568 4756 flags.go:64] FLAG: --event-burst="100" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199573 4756 flags.go:64] FLAG: --event-qps="50" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199578 4756 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199583 4756 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199587 4756 flags.go:64] FLAG: --eviction-hard="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199593 4756 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199597 4756 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199601 4756 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.199839 4756 flags.go:64] FLAG: --eviction-soft="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201197 4756 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 24 
12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201213 4756 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201228 4756 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201249 4756 flags.go:64] FLAG: --experimental-mounter-path="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201262 4756 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201272 4756 flags.go:64] FLAG: --fail-swap-on="true" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201281 4756 flags.go:64] FLAG: --feature-gates="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201343 4756 flags.go:64] FLAG: --file-check-frequency="20s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201355 4756 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201367 4756 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201377 4756 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201390 4756 flags.go:64] FLAG: --healthz-port="10248" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201410 4756 flags.go:64] FLAG: --help="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201433 4756 flags.go:64] FLAG: --hostname-override="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201445 4756 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201460 4756 flags.go:64] FLAG: --http-check-frequency="20s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201490 4756 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201500 4756 flags.go:64] FLAG: --image-credential-provider-config="" Nov 24 
12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.201995 4756 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202014 4756 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202024 4756 flags.go:64] FLAG: --image-service-endpoint="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202033 4756 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202045 4756 flags.go:64] FLAG: --kube-api-burst="100" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202056 4756 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202067 4756 flags.go:64] FLAG: --kube-api-qps="50" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202076 4756 flags.go:64] FLAG: --kube-reserved="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202087 4756 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202098 4756 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202108 4756 flags.go:64] FLAG: --kubelet-cgroups="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202118 4756 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202128 4756 flags.go:64] FLAG: --lock-file="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202140 4756 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202151 4756 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202192 4756 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202233 4756 flags.go:64] FLAG: --log-json-split-stream="false" Nov 24 
12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202243 4756 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202253 4756 flags.go:64] FLAG: --log-text-split-stream="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202262 4756 flags.go:64] FLAG: --logging-format="text" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202272 4756 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202283 4756 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202292 4756 flags.go:64] FLAG: --manifest-url="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202300 4756 flags.go:64] FLAG: --manifest-url-header="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202315 4756 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202325 4756 flags.go:64] FLAG: --max-open-files="1000000" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202338 4756 flags.go:64] FLAG: --max-pods="110" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202347 4756 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202357 4756 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202367 4756 flags.go:64] FLAG: --memory-manager-policy="None" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202376 4756 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202387 4756 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202397 4756 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202406 4756 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202443 4756 flags.go:64] FLAG: --node-status-max-images="50" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202454 4756 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202467 4756 flags.go:64] FLAG: --oom-score-adj="-999" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202479 4756 flags.go:64] FLAG: --pod-cidr="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202490 4756 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202511 4756 flags.go:64] FLAG: --pod-manifest-path="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202520 4756 flags.go:64] FLAG: --pod-max-pids="-1" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202530 4756 flags.go:64] FLAG: --pods-per-core="0" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202539 4756 flags.go:64] FLAG: --port="10250" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202549 4756 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202558 4756 flags.go:64] FLAG: --provider-id="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202568 4756 flags.go:64] FLAG: --qos-reserved="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202579 4756 flags.go:64] FLAG: --read-only-port="10255" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202589 4756 flags.go:64] FLAG: --register-node="true" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202597 4756 flags.go:64] FLAG: --register-schedulable="true" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202606 4756 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202623 4756 flags.go:64] FLAG: --registry-burst="10" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202633 4756 flags.go:64] FLAG: --registry-qps="5" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202642 4756 flags.go:64] FLAG: --reserved-cpus="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202651 4756 flags.go:64] FLAG: --reserved-memory="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202662 4756 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202672 4756 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202681 4756 flags.go:64] FLAG: --rotate-certificates="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202691 4756 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202699 4756 flags.go:64] FLAG: --runonce="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202709 4756 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202718 4756 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202727 4756 flags.go:64] FLAG: --seccomp-default="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202736 4756 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202746 4756 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202755 4756 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202765 4756 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202775 
4756 flags.go:64] FLAG: --storage-driver-password="root" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202784 4756 flags.go:64] FLAG: --storage-driver-secure="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202793 4756 flags.go:64] FLAG: --storage-driver-table="stats" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202802 4756 flags.go:64] FLAG: --storage-driver-user="root" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202811 4756 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202821 4756 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202831 4756 flags.go:64] FLAG: --system-cgroups="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202839 4756 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202855 4756 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202863 4756 flags.go:64] FLAG: --tls-cert-file="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202872 4756 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202889 4756 flags.go:64] FLAG: --tls-min-version="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202899 4756 flags.go:64] FLAG: --tls-private-key-file="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202908 4756 flags.go:64] FLAG: --topology-manager-policy="none" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202917 4756 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202927 4756 flags.go:64] FLAG: --topology-manager-scope="container" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202937 4756 flags.go:64] FLAG: --v="2" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202950 4756 
flags.go:64] FLAG: --version="false" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202961 4756 flags.go:64] FLAG: --vmodule="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202973 4756 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.202982 4756 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203336 4756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203349 4756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203358 4756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203368 4756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203377 4756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203385 4756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203395 4756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203402 4756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203411 4756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203419 4756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203428 4756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203437 4756 feature_gate.go:330] 
unrecognized feature gate: ClusterAPIInstall Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203444 4756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203456 4756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203467 4756 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203477 4756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203486 4756 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203495 4756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203503 4756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203512 4756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203522 4756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203530 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203540 4756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203549 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203557 4756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203567 4756 feature_gate.go:330] unrecognized feature 
gate: InsightsConfigAPI Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203575 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203583 4756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203591 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203599 4756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203607 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203616 4756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203624 4756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203632 4756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203640 4756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203648 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203655 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203664 4756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203672 4756 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203680 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 
24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203687 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203695 4756 feature_gate.go:330] unrecognized feature gate: Example Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203704 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203713 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203720 4756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203728 4756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203737 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203745 4756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203756 4756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203764 4756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203772 4756 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203780 4756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203787 4756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203795 4756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203807 4756 feature_gate.go:353] Setting GA 
feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203817 4756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203825 4756 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203833 4756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203843 4756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203852 4756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203860 4756 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203867 4756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203878 4756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203888 4756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203899 4756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203908 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203917 4756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203925 4756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203933 4756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203941 4756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.203949 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.203972 4756 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.217784 4756 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.217875 4756 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218005 4756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218018 4756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218029 4756 
feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218038 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218047 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218055 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218063 4756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218071 4756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218079 4756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218088 4756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218096 4756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218104 4756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218114 4756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218126 4756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218136 4756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218146 4756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218188 4756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218201 4756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218213 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218222 4756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218231 4756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218239 4756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218251 4756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218261 4756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218270 4756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218280 4756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218288 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218296 4756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218303 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218311 4756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218319 4756 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218327 4756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218337 4756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218348 4756 feature_gate.go:330] unrecognized feature gate: Example Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218361 4756 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218371 4756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218379 4756 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218387 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218394 4756 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218403 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218411 4756 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218420 4756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218428 4756 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218435 4756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218443 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218451 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218459 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218467 4756 feature_gate.go:330] unrecognized feature 
gate: MultiArchInstallAzure Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218475 4756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218482 4756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218490 4756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218498 4756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218506 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218514 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218521 4756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218529 4756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218539 4756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218548 4756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218557 4756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218564 4756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218572 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218579 4756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218587 4756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218595 4756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218603 4756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218611 4756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218619 4756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218627 4756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218634 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218642 4756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218651 4756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.218666 
4756 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218969 4756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.218993 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219005 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219016 4756 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219027 4756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219038 4756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219052 4756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219066 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219077 4756 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219087 4756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219099 4756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219109 4756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219118 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219128 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219138 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219147 4756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219203 4756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219224 4756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219235 4756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219243 4756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219252 4756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 
12:27:48.219260 4756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219267 4756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219276 4756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219284 4756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219292 4756 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219300 4756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219307 4756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219318 4756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219327 4756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219336 4756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219346 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219360 4756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219382 4756 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219401 4756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219412 4756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219424 4756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219434 4756 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219444 4756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219453 4756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219460 4756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219468 4756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219476 4756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219485 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219493 4756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219501 4756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219509 4756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219517 4756 feature_gate.go:330] 
unrecognized feature gate: OnClusterBuild Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219524 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219532 4756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219540 4756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219548 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219556 4756 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219563 4756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219571 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219578 4756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219587 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219595 4756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219602 4756 feature_gate.go:330] unrecognized feature gate: Example Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219610 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219617 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219625 4756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219636 4756 
feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219646 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219656 4756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219670 4756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219680 4756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219690 4756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219699 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219707 4756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.219716 4756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.219729 4756 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.220086 4756 server.go:940] "Client rotation is on, will bootstrap in background" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.226311 4756 
bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.226445 4756 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.228661 4756 server.go:997] "Starting client certificate rotation" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.228702 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.228927 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-29 14:38:19.579903825 +0000 UTC Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.229052 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 842h10m31.350859658s for next certificate rotation Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.267376 4756 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.273551 4756 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.299327 4756 log.go:25] "Validated CRI v1 runtime API" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.345446 4756 log.go:25] "Validated CRI v1 image API" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.349103 4756 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.354865 4756 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-12-22-46-00:/dev/sr0 7B77-95E7:/dev/vda2 
de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.354921 4756 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.386625 4756 manager.go:217] Machine: {Timestamp:2025-11-24 12:27:48.383733111 +0000 UTC m=+0.741247353 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:76b0c406-a550-4a16-95f4-45deb24662b5 BootID:a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 
HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:71:6d:b4 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:71:6d:b4 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ff:e4:5f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b2:a3:d0 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:70:7d:a0 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:15:23:fe Speed:-1 Mtu:1496} {Name:eth10 MacAddress:96:ca:34:05:0d:fb Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a2:17:f5:1a:bb:f2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 
BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.387070 4756 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.387338 4756 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.389582 4756 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.390039 4756 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.390098 4756 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.391134 4756 topology_manager.go:138] "Creating topology manager with none policy"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.391209 4756 container_manager_linux.go:303] "Creating device plugin manager"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.391796 4756 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.391839 4756 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.392289 4756 state_mem.go:36] "Initialized new in-memory state store"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.392489 4756 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.398454 4756 kubelet.go:418] "Attempting to sync node with API server"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.398507 4756 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.398557 4756 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.398598 4756 kubelet.go:324] "Adding apiserver pod source"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.398629 4756 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.403670 4756 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.405012 4756 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.406539 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.406611 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Nov 24 12:27:48 crc kubenswrapper[4756]: E1124 12:27:48.406660 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Nov 24 12:27:48 crc kubenswrapper[4756]: E1124 12:27:48.406715 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.408629 4756 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.410380 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.410430 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.410449 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.410467 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.410490 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.410506 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.410520 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.410543 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.410562 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.410586 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.410640 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.410656 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.410693 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.411592 4756 server.go:1280] "Started kubelet"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.412012 4756 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.412134 4756 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.412389 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.413220 4756 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 24 12:27:48 crc systemd[1]: Started Kubernetes Kubelet.
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.415114 4756 server.go:460] "Adding debug handlers to kubelet server"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.419760 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.419851 4756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.420012 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:28:26.493998484 +0000 UTC
Nov 24 12:27:48 crc kubenswrapper[4756]: E1124 12:27:48.420342 4756 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.420381 4756 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.420391 4756 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.420445 4756 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 24 12:27:48 crc kubenswrapper[4756]: E1124 12:27:48.420938 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="200ms"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.421677 4756 factory.go:55] Registering systemd factory
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.421702 4756 factory.go:221] Registration of the systemd container factory successfully
Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.421921 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Nov 24 12:27:48 crc kubenswrapper[4756]: E1124 12:27:48.422806 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.423299 4756 factory.go:153] Registering CRI-O factory
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.423323 4756 factory.go:221] Registration of the crio container factory successfully
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.423398 4756 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.423428 4756 factory.go:103] Registering Raw factory
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.423527 4756 manager.go:1196] Started watching for new ooms in manager
Nov 24 12:27:48 crc kubenswrapper[4756]: E1124 12:27:48.425090 4756 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187af11319faebf4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 12:27:48.41151794 +0000 UTC m=+0.769032122,LastTimestamp:2025-11-24 12:27:48.41151794 +0000 UTC m=+0.769032122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.429594 4756 manager.go:319] Starting recovery of all containers
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.432836 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.432905 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.432921 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.432934 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.432946 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.432959 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.432971 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.432983 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.432997 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433010 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433022 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433034 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433046 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433063 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433101 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433115 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433146 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433175 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433208 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433220 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433232 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433246 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433259 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433270 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433282 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433295 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433312 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433327 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433341 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433353 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433366 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433379 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433391 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433404 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433456 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433469 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433483 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433497 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433515 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433529 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433544 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433563 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433576 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433590 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433603 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433616 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433629 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433643 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433656 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433669 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433681 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433694 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433713 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433729 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433744 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433758 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433810 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433822 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433835 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433850 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433866 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433882 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433900 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433919 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433934 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433945 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433958 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433970 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.433984 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.435847 4756 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.435895 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.435915 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.435929 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.435945 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.435960 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.435975 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.435988 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436003 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436016 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436028 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436040 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436056 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436069 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436082 4756 reconstruct.go:130] "Volume is marked as uncertain and added
into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436095 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436109 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436125 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436139 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436153 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436185 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436202 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436215 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436228 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436241 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436254 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436283 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436296 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436309 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436323 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436337 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436350 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436363 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" 
seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436377 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436390 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436402 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436423 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436437 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436452 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 
12:27:48.436474 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436492 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436507 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436522 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436535 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436549 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436563 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436577 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436590 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436604 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436619 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436633 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436649 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436662 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436678 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436691 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436704 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436718 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436730 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436742 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436756 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436770 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436785 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436798 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436814 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436827 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436843 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436856 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436870 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436883 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436896 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436909 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436922 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436938 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436951 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436963 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.436977 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437039 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437052 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437065 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437079 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437093 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437107 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437121 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437134 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437146 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437177 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437190 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437203 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437216 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437228 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437242 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437255 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437268 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437280 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" 
seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437293 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437306 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437320 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437333 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437346 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437358 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: 
I1124 12:27:48.437370 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437382 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437396 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437408 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437422 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437437 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437452 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437471 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437488 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437505 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437522 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437539 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437553 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437570 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437582 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437597 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437611 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437625 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437638 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437652 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437665 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437678 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437738 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437753 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437797 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437812 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437825 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437839 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437852 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437867 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437881 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437897 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437911 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437925 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437940 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437954 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437969 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437981 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.437995 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.438006 4756 reconstruct.go:97] "Volume reconstruction finished" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.438017 4756 reconciler.go:26] "Reconciler: start to sync state" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.457712 4756 manager.go:324] Recovery completed Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.468427 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.471983 4756 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.472121 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.472148 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.472178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.473073 4756 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.473122 4756 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.473150 4756 state_mem.go:36] "Initialized new in-memory state store" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.474233 4756 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.474282 4756 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.474315 4756 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 12:27:48 crc kubenswrapper[4756]: E1124 12:27:48.474476 4756 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.478752 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Nov 24 12:27:48 crc kubenswrapper[4756]: E1124 12:27:48.478862 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.500428 4756 policy_none.go:49] "None policy: Start" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.501620 4756 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.501660 4756 state_mem.go:35] "Initializing new in-memory state store" Nov 24 12:27:48 crc kubenswrapper[4756]: E1124 12:27:48.520485 4756 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.559288 4756 manager.go:334] "Starting Device Plugin manager" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.559445 4756 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.559462 4756 server.go:79] "Starting device plugin registration server" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.559870 4756 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.559883 4756 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.560277 4756 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.560343 4756 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.560350 4756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 12:27:48 crc kubenswrapper[4756]: E1124 12:27:48.566008 4756 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.575403 4756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.575658 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.577487 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.577533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.577541 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.577662 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.577945 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.578042 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.579116 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.579177 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.579190 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.579339 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.579425 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.579503 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.579665 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.579775 4756 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.579821 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.581072 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.581189 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.581262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.581260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.581375 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.581390 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.581589 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.581866 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.581943 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.582782 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.582820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.582835 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.582969 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.583151 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.583277 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.583361 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.583648 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.583734 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.583811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.583838 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.583849 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.583985 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.584012 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.584779 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.584820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.584829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.585041 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.585074 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 
12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.585088 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:48 crc kubenswrapper[4756]: E1124 12:27:48.622197 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="400ms" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.640914 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.640966 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.641000 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.641030 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.641114 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.641177 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.641204 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.641240 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.641259 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 
12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.641280 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.641299 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.641341 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.641362 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.641383 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.641409 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.661025 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.662358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.662407 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.662423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.662452 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 12:27:48 crc kubenswrapper[4756]: E1124 12:27:48.662977 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.743257 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.743651 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.743677 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.743743 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.743824 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.743994 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744044 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744081 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744116 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744147 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744205 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744351 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744354 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744403 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744265 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744442 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744482 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744495 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 
12:27:48.744545 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744563 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744591 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744650 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744696 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744772 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744787 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744832 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744838 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744893 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744898 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 
12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.744946 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.864040 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.865457 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.865500 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.865514 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.865540 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 12:27:48 crc kubenswrapper[4756]: E1124 12:27:48.866212 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.931092 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.938321 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.953475 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.974110 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: I1124 12:27:48.979693 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:27:48 crc kubenswrapper[4756]: W1124 12:27:48.996450 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6211ae6df1acfb377299459777b3391b0e0a68a8d659f1a7016d5a36cb7eb8ce WatchSource:0}: Error finding container 6211ae6df1acfb377299459777b3391b0e0a68a8d659f1a7016d5a36cb7eb8ce: Status 404 returned error can't find the container with id 6211ae6df1acfb377299459777b3391b0e0a68a8d659f1a7016d5a36cb7eb8ce Nov 24 12:27:49 crc kubenswrapper[4756]: E1124 12:27:49.031385 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="800ms" Nov 24 12:27:49 crc kubenswrapper[4756]: W1124 12:27:49.031526 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b187bfeb63410b309700bb306435cfc89d49ca6a47d52e3130d547f37a258e42 WatchSource:0}: Error finding container b187bfeb63410b309700bb306435cfc89d49ca6a47d52e3130d547f37a258e42: Status 404 returned error can't find the container with id b187bfeb63410b309700bb306435cfc89d49ca6a47d52e3130d547f37a258e42 Nov 24 12:27:49 crc kubenswrapper[4756]: W1124 12:27:49.033119 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a5edd9021a2c0364f02468cea885828203705b03db84eb2e0bd3c6f90100ce0a WatchSource:0}: Error finding container a5edd9021a2c0364f02468cea885828203705b03db84eb2e0bd3c6f90100ce0a: Status 404 returned error can't find the container with id a5edd9021a2c0364f02468cea885828203705b03db84eb2e0bd3c6f90100ce0a Nov 24 12:27:49 crc kubenswrapper[4756]: W1124 12:27:49.034948 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d6dfe79add91fa9a8a14a34f2586c7011162cd3162e72cd9eb89f892f950b5d2 WatchSource:0}: Error finding container d6dfe79add91fa9a8a14a34f2586c7011162cd3162e72cd9eb89f892f950b5d2: Status 404 returned error can't find the container with id d6dfe79add91fa9a8a14a34f2586c7011162cd3162e72cd9eb89f892f950b5d2 Nov 24 12:27:49 crc kubenswrapper[4756]: W1124 12:27:49.036209 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-02f6c2e007d280ec3239af8811ff6990c58d792049f3c0c375d2adec01907bb7 WatchSource:0}: Error finding container 02f6c2e007d280ec3239af8811ff6990c58d792049f3c0c375d2adec01907bb7: Status 404 returned error can't find the container with id 02f6c2e007d280ec3239af8811ff6990c58d792049f3c0c375d2adec01907bb7 Nov 24 12:27:49 crc kubenswrapper[4756]: I1124 12:27:49.266629 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:49 crc kubenswrapper[4756]: I1124 12:27:49.268846 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:49 crc kubenswrapper[4756]: I1124 12:27:49.268904 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 
12:27:49 crc kubenswrapper[4756]: I1124 12:27:49.268923 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:49 crc kubenswrapper[4756]: I1124 12:27:49.268961 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 12:27:49 crc kubenswrapper[4756]: E1124 12:27:49.269630 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Nov 24 12:27:49 crc kubenswrapper[4756]: W1124 12:27:49.346131 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Nov 24 12:27:49 crc kubenswrapper[4756]: E1124 12:27:49.346250 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Nov 24 12:27:49 crc kubenswrapper[4756]: I1124 12:27:49.413938 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Nov 24 12:27:49 crc kubenswrapper[4756]: I1124 12:27:49.421069 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:01:30.571663055 +0000 UTC Nov 24 12:27:49 crc kubenswrapper[4756]: I1124 12:27:49.421148 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Waiting 988h33m41.150518375s for next certificate rotation Nov 24 12:27:49 crc kubenswrapper[4756]: I1124 12:27:49.478729 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b187bfeb63410b309700bb306435cfc89d49ca6a47d52e3130d547f37a258e42"} Nov 24 12:27:49 crc kubenswrapper[4756]: I1124 12:27:49.480066 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6211ae6df1acfb377299459777b3391b0e0a68a8d659f1a7016d5a36cb7eb8ce"} Nov 24 12:27:49 crc kubenswrapper[4756]: I1124 12:27:49.482840 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"02f6c2e007d280ec3239af8811ff6990c58d792049f3c0c375d2adec01907bb7"} Nov 24 12:27:49 crc kubenswrapper[4756]: I1124 12:27:49.484577 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d6dfe79add91fa9a8a14a34f2586c7011162cd3162e72cd9eb89f892f950b5d2"} Nov 24 12:27:49 crc kubenswrapper[4756]: I1124 12:27:49.485456 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a5edd9021a2c0364f02468cea885828203705b03db84eb2e0bd3c6f90100ce0a"} Nov 24 12:27:49 crc kubenswrapper[4756]: W1124 12:27:49.565568 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Nov 24 
12:27:49 crc kubenswrapper[4756]: E1124 12:27:49.565656 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Nov 24 12:27:49 crc kubenswrapper[4756]: W1124 12:27:49.662952 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Nov 24 12:27:49 crc kubenswrapper[4756]: E1124 12:27:49.663047 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Nov 24 12:27:49 crc kubenswrapper[4756]: E1124 12:27:49.832896 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="1.6s" Nov 24 12:27:49 crc kubenswrapper[4756]: W1124 12:27:49.972013 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Nov 24 12:27:49 crc kubenswrapper[4756]: E1124 12:27:49.972093 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.070449 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.072720 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.072779 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.072792 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.072835 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 12:27:50 crc kubenswrapper[4756]: E1124 12:27:50.073686 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.414113 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.498531 4756 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300" exitCode=0 Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.498791 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 
12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.498780 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300"} Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.499878 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.499918 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.499934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.503604 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.503826 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0"} Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.503868 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910"} Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.503881 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163"} 
Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.503893 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3"} Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.504968 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.504997 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.505008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.508223 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9" exitCode=0 Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.508287 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9"} Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.508378 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.509961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.510005 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.510021 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.511543 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.512126 4756 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b" exitCode=0 Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.512223 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b"} Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.512403 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.512714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.512753 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.512765 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.513437 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.513461 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.513473 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.514989 4756 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9a9931ac164b4c9abc52225b6a924779955d8127f06a9da3517ec80fdfe785dc" exitCode=0 Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.515046 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9a9931ac164b4c9abc52225b6a924779955d8127f06a9da3517ec80fdfe785dc"} Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.515209 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.516614 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.516661 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.516682 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.922022 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:27:50 crc kubenswrapper[4756]: I1124 12:27:50.934496 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.412947 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection 
refused Nov 24 12:27:51 crc kubenswrapper[4756]: E1124 12:27:51.434056 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="3.2s" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.524407 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9"} Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.524470 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46"} Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.524483 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003"} Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.524492 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1"} Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.526197 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.526111 4756 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99" exitCode=0 Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.526185 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99"} Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.527391 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.527425 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.527435 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.530912 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.530915 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d955897970d744bb09014afb39e243d4759c0e4148b9d72564033687b8e652d0"} Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.532483 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.532526 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.532538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.534818 4756 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cdd51b4d4aec28f8525fdf78ca2d294427f4a87435400c81615d68c4bf2988ee"} Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.534843 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.534854 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"47fc8af824259a3716e9d2705d5f65224260f5ed3320e44b34d37a2ba2f4dca0"} Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.534867 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9205c28e1f32d71dd30b61a2b2df71cb148f53e2b119af807f70acd56fa1e8f1"} Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.534935 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.535752 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.535786 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.535798 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.536571 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.536596 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.536608 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.673852 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.675088 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.675127 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.675140 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:51 crc kubenswrapper[4756]: I1124 12:27:51.675190 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 12:27:51 crc kubenswrapper[4756]: E1124 12:27:51.675612 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Nov 24 12:27:51 crc kubenswrapper[4756]: W1124 12:27:51.724697 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Nov 24 12:27:51 crc kubenswrapper[4756]: E1124 12:27:51.724792 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Nov 24 12:27:51 crc kubenswrapper[4756]: W1124 12:27:51.802866 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Nov 24 12:27:51 crc kubenswrapper[4756]: E1124 12:27:51.802944 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.542726 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb"} Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.542879 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.544261 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.544293 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.544304 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 
12:27:52.546816 4756 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1" exitCode=0 Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.546936 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.546887 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1"} Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.547020 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.547065 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.547135 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.547199 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.547211 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.548646 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.548661 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.548687 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:52 crc kubenswrapper[4756]: 
I1124 12:27:52.548702 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.548715 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.548729 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.549565 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.549609 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.549572 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.549652 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.549622 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:52 crc kubenswrapper[4756]: I1124 12:27:52.549670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.068604 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.396585 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.558575 4756 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266"} Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.558710 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.559423 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0"} Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.559477 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca"} Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.559491 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2"} Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.558726 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.560237 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.560363 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.560372 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.560743 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.560868 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.560961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:53 crc kubenswrapper[4756]: I1124 12:27:53.828900 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.568517 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428"} Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.568619 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.568620 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.569707 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.569783 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.569809 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.569979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 
12:27:54.570006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.570017 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.876261 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.878985 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.879044 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.879064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.879099 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 12:27:54 crc kubenswrapper[4756]: I1124 12:27:54.941225 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:27:55 crc kubenswrapper[4756]: I1124 12:27:55.571587 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:55 crc kubenswrapper[4756]: I1124 12:27:55.571668 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:55 crc kubenswrapper[4756]: I1124 12:27:55.572836 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:55 crc kubenswrapper[4756]: I1124 12:27:55.572878 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:55 crc 
kubenswrapper[4756]: I1124 12:27:55.572896 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:55 crc kubenswrapper[4756]: I1124 12:27:55.572905 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:55 crc kubenswrapper[4756]: I1124 12:27:55.572925 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:55 crc kubenswrapper[4756]: I1124 12:27:55.572936 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.539282 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.539468 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.540544 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.540644 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.540723 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.571422 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.573246 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.573409 4756 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.574650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.574672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.574684 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.574734 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.574761 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:56 crc kubenswrapper[4756]: I1124 12:27:56.574778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:57 crc kubenswrapper[4756]: I1124 12:27:57.578731 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 24 12:27:57 crc kubenswrapper[4756]: I1124 12:27:57.578985 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:57 crc kubenswrapper[4756]: I1124 12:27:57.580330 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:57 crc kubenswrapper[4756]: I1124 12:27:57.580415 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:57 crc kubenswrapper[4756]: I1124 12:27:57.580460 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:58 crc kubenswrapper[4756]: E1124 12:27:58.566108 4756 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 12:27:58 crc kubenswrapper[4756]: I1124 12:27:58.993702 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:27:58 crc kubenswrapper[4756]: I1124 12:27:58.993910 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:58 crc kubenswrapper[4756]: I1124 12:27:58.995405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:58 crc kubenswrapper[4756]: I1124 12:27:58.995444 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:58 crc kubenswrapper[4756]: I1124 12:27:58.995456 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:59 crc kubenswrapper[4756]: I1124 12:27:59.662875 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:27:59 crc kubenswrapper[4756]: I1124 12:27:59.663063 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:27:59 crc kubenswrapper[4756]: I1124 12:27:59.664792 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:27:59 crc kubenswrapper[4756]: I1124 12:27:59.664863 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:27:59 crc kubenswrapper[4756]: I1124 12:27:59.664882 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:27:59 crc kubenswrapper[4756]: I1124 12:27:59.668690 4756 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:28:00 crc kubenswrapper[4756]: I1124 12:28:00.584673 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:28:00 crc kubenswrapper[4756]: I1124 12:28:00.586335 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:00 crc kubenswrapper[4756]: I1124 12:28:00.586403 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:00 crc kubenswrapper[4756]: I1124 12:28:00.586420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:01 crc kubenswrapper[4756]: I1124 12:28:01.993789 4756 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 12:28:01 crc kubenswrapper[4756]: I1124 12:28:01.993895 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 12:28:02 crc kubenswrapper[4756]: W1124 12:28:02.294835 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 12:28:02 crc kubenswrapper[4756]: I1124 
12:28:02.294979 4756 trace.go:236] Trace[1169646985]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 12:27:52.293) (total time: 10001ms): Nov 24 12:28:02 crc kubenswrapper[4756]: Trace[1169646985]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:28:02.294) Nov 24 12:28:02 crc kubenswrapper[4756]: Trace[1169646985]: [10.001238952s] [10.001238952s] END Nov 24 12:28:02 crc kubenswrapper[4756]: E1124 12:28:02.295007 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 12:28:02 crc kubenswrapper[4756]: E1124 12:28:02.358030 4756 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.187af11319faebf4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 12:27:48.41151794 +0000 UTC m=+0.769032122,LastTimestamp:2025-11-24 12:27:48.41151794 +0000 UTC m=+0.769032122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 12:28:02 crc kubenswrapper[4756]: I1124 12:28:02.410345 4756 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 12:28:02 crc kubenswrapper[4756]: I1124 12:28:02.410427 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 12:28:02 crc kubenswrapper[4756]: I1124 12:28:02.424622 4756 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Nov 24 12:28:02 crc kubenswrapper[4756]: I1124 12:28:02.424699 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 12:28:04 crc kubenswrapper[4756]: I1124 12:28:04.951292 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:28:04 crc kubenswrapper[4756]: I1124 12:28:04.951560 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:28:04 crc kubenswrapper[4756]: I1124 12:28:04.953060 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 
12:28:04 crc kubenswrapper[4756]: I1124 12:28:04.953126 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:04 crc kubenswrapper[4756]: I1124 12:28:04.953144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:04 crc kubenswrapper[4756]: I1124 12:28:04.956485 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:28:05 crc kubenswrapper[4756]: I1124 12:28:05.599265 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:28:05 crc kubenswrapper[4756]: I1124 12:28:05.599340 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:28:05 crc kubenswrapper[4756]: I1124 12:28:05.600887 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:05 crc kubenswrapper[4756]: I1124 12:28:05.600977 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:05 crc kubenswrapper[4756]: I1124 12:28:05.601011 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:06 crc kubenswrapper[4756]: I1124 12:28:06.600961 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 12:28:06 crc kubenswrapper[4756]: I1124 12:28:06.601175 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:28:06 crc kubenswrapper[4756]: I1124 12:28:06.605125 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:06 crc kubenswrapper[4756]: I1124 12:28:06.605200 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 12:28:06 crc kubenswrapper[4756]: I1124 12:28:06.605240 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:06 crc kubenswrapper[4756]: I1124 12:28:06.618750 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 12:28:06 crc kubenswrapper[4756]: I1124 12:28:06.941286 4756 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 24 12:28:07 crc kubenswrapper[4756]: E1124 12:28:07.412703 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 24 12:28:07 crc kubenswrapper[4756]: I1124 12:28:07.415929 4756 trace.go:236] Trace[1810288896]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 12:27:52.701) (total time: 14714ms): Nov 24 12:28:07 crc kubenswrapper[4756]: Trace[1810288896]: ---"Objects listed" error: 14714ms (12:28:07.415) Nov 24 12:28:07 crc kubenswrapper[4756]: Trace[1810288896]: [14.714570637s] [14.714570637s] END Nov 24 12:28:07 crc kubenswrapper[4756]: I1124 12:28:07.415978 4756 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 12:28:07 crc kubenswrapper[4756]: I1124 12:28:07.418738 4756 trace.go:236] Trace[794100279]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 12:27:55.548) (total time: 11869ms): Nov 24 12:28:07 crc kubenswrapper[4756]: Trace[794100279]: ---"Objects listed" error: 11869ms (12:28:07.418) Nov 24 12:28:07 crc kubenswrapper[4756]: Trace[794100279]: [11.869905601s] [11.869905601s] END Nov 24 12:28:07 crc kubenswrapper[4756]: I1124 12:28:07.418784 4756 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Nov 24 12:28:07 crc kubenswrapper[4756]: I1124 12:28:07.418910 4756 trace.go:236] Trace[1623329241]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 12:27:56.119) (total time: 11299ms): Nov 24 12:28:07 crc kubenswrapper[4756]: Trace[1623329241]: ---"Objects listed" error: 11299ms (12:28:07.418) Nov 24 12:28:07 crc kubenswrapper[4756]: Trace[1623329241]: [11.299803015s] [11.299803015s] END Nov 24 12:28:07 crc kubenswrapper[4756]: I1124 12:28:07.418943 4756 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 24 12:28:07 crc kubenswrapper[4756]: E1124 12:28:07.419445 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 24 12:28:07 crc kubenswrapper[4756]: I1124 12:28:07.419874 4756 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 24 12:28:07 crc kubenswrapper[4756]: I1124 12:28:07.665956 4756 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42312->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 24 12:28:07 crc kubenswrapper[4756]: I1124 12:28:07.666035 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42312->192.168.126.11:17697: read: connection reset by peer" Nov 24 12:28:07 crc kubenswrapper[4756]: I1124 12:28:07.666485 4756 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints 
namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 24 12:28:07 crc kubenswrapper[4756]: I1124 12:28:07.666559 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.412574 4756 apiserver.go:52] "Watching apiserver" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.417548 4756 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.417861 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-etcd/etcd-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.418195 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.418244 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.418310 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.418400 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.418488 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.418751 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.419510 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.419698 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.419752 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.421320 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.421404 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.421977 4756 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.424984 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426205 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426246 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 
24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426270 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426294 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426318 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426368 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426392 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426415 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426436 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426458 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426503 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426527 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426551 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 12:28:08 crc 
kubenswrapper[4756]: I1124 12:28:08.426573 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426593 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426615 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426636 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426747 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426838 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426905 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426930 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.426987 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.427033 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.427050 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.427127 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.427384 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.427415 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430577 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 12:28:08 crc 
kubenswrapper[4756]: I1124 12:28:08.430624 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430688 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430738 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430764 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430789 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430822 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430851 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430873 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430906 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430938 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430976 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 
24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431009 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431039 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431065 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431096 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431121 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431173 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431208 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431229 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431251 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431292 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431320 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 
12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431341 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431365 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431397 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431427 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431456 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431488 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431512 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431532 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431556 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431578 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431597 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 12:28:08 
crc kubenswrapper[4756]: I1124 12:28:08.431618 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431643 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431822 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431989 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.432103 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.432239 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.432332 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.432385 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.432544 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.432744 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.433171 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 12:28:08 crc 
kubenswrapper[4756]: I1124 12:28:08.433362 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.433555 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.433597 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.433705 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.433838 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.433887 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.433911 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.434016 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.427216 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.427207 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.427281 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430491 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.434494 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.434065 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.435202 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430758 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430841 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.430967 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431050 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431058 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431243 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431342 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431359 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431482 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.431586 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.432238 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.432465 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.432511 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.432741 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.432948 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.433238 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.436397 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.438768 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.438861 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.438959 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.439059 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.439189 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.439538 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.439749 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.439763 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.439876 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.439881 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.439997 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.440228 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.440288 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.440499 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.440509 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.440652 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.440652 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.440724 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.440764 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.440861 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.440994 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441007 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441034 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441075 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441111 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441126 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441117 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441212 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441365 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441421 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441457 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441486 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441515 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441541 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441556 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441566 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441661 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441697 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441727 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441740 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441756 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441797 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441833 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441863 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441983 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442022 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442054 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442084 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442117 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442148 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442195 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442227 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442255 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442322 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442352 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442382 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442410 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442440 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442477 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442504 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442537 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442565 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442593 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442623 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442652 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442682 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442712 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442742 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442771 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442801 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442832 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442864 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442897 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442927 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442961 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442991 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443022 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443050 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443081 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443108 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443135 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443182 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443210 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443238 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443270 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443302 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443335 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443366 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443393 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443425 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443486 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443514 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443543 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443573 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443601 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443631 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443656 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443683 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443712 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443738 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443766 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443789 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443813 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443843 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443881 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443924 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443951 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443983 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444015 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444042 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444069 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" 
(UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444100 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444127 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444151 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444202 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444226 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 12:28:08 crc 
kubenswrapper[4756]: I1124 12:28:08.444256 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441750 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444282 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444309 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444359 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444386 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 
12:28:08.444406 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444420 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441770 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444461 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441936 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.441979 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442005 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442857 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.442993 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443096 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443202 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443307 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443365 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443523 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443815 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.443895 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444068 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444244 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444578 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444667 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444679 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444850 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444811 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.444493 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449404 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449515 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.445031 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.445148 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.445224 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.445398 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.445493 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.445903 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.446026 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.446205 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.446262 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.446851 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.447206 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.447268 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.447891 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.447932 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.448089 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.448432 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.448630 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449066 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449110 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449497 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449726 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449766 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449794 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449817 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449839 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449864 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449909 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449928 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449943 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.449980 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450013 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450042 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450070 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450102 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450136 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450186 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450216 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450233 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450245 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450276 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450380 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450409 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450443 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 
12:28:08.450527 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450570 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450601 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450636 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450673 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450691 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450716 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450708 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450732 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450760 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450798 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450834 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450864 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450887 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.450914 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451006 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451441 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451477 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451504 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451528 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451668 4756 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451691 4756 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451708 4756 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451725 4756 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451740 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451753 4756 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451764 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node 
\"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451774 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451772 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451785 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451835 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451854 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451871 4756 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451888 4756 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451903 4756 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451920 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451936 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451961 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.451976 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452006 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452024 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") 
on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452039 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452055 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452071 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452086 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452100 4756 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452118 4756 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452132 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc 
kubenswrapper[4756]: I1124 12:28:08.452169 4756 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452184 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452198 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452210 4756 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452223 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452236 4756 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452248 4756 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452264 4756 reconciler_common.go:293] "Volume detached 
for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452280 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452298 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452316 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452359 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452377 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452395 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452409 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452422 4756 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452436 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452449 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452463 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452477 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452491 4756 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452505 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452518 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452532 4756 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452545 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452558 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452574 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452587 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452601 4756 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452619 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452632 4756 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452647 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452660 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452673 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452686 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452710 4756 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452726 4756 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452738 4756 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452752 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452887 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453186 4756 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453206 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453224 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453228 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453238 4756 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453253 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453268 4756 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453285 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453304 4756 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453318 4756 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453331 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453346 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453360 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453375 4756 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453388 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453422 4756 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.460883 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452037 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452061 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452252 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452568 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452791 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.452965 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453069 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.453192 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.455233 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.455300 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.455509 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.455951 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.456025 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.456117 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.456191 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.456350 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.456404 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.456683 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.456764 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.456937 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.457027 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.457066 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.457233 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.457476 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.457507 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.457715 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.457969 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.458452 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.458885 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.459122 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.460286 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.460595 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.460717 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.461112 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.461795 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.461964 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.492914 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.462230 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.462276 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.462701 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.463004 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.463135 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.463240 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.463418 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.463610 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.463800 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.463996 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.464222 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.464486 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.466227 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.466961 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.467069 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.467070 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.467246 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.467389 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.467402 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:28:08.96736517 +0000 UTC m=+21.324879312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.493219 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:08.9931933 +0000 UTC m=+21.350707442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.467593 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.467610 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.468087 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.468150 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.493319 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.468524 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.472655 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.472674 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.472923 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.475254 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.476007 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.476627 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.480769 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.482533 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.483703 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.488048 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.488645 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.488691 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.489020 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.489396 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.489444 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.489951 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.490227 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.491503 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.491632 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.491843 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.491974 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.494776 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.495093 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.495627 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.496405 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.496546 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:08.996513756 +0000 UTC m=+21.354028088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.496423 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.496694 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.496816 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.496919 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.496982 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.497071 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:08.99705714 +0000 UTC m=+21.354571282 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.498467 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.498773 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.499507 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.500048 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.500765 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.501509 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.502147 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.505477 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.507699 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.510783 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.511804 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.517843 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.519529 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.522628 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.522674 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.522696 4756 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:08 crc kubenswrapper[4756]: E1124 12:28:08.522794 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:09.022760946 +0000 UTC m=+21.380275088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.522932 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.526794 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.527842 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.528549 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.538970 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.539784 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.542577 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.544858 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.547232 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.548243 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.551460 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.552258 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: W1124 12:28:08.552944 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-08ca192200786382fae3f80d723ffef5a745f97e59be6e570bf8584fc69c4aef WatchSource:0}: Error finding container 08ca192200786382fae3f80d723ffef5a745f97e59be6e570bf8584fc69c4aef: Status 404 returned error can't find the container with id 08ca192200786382fae3f80d723ffef5a745f97e59be6e570bf8584fc69c4aef Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554403 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554480 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554540 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554556 4756 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 
12:28:08.554570 4756 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554584 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554596 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554610 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554622 4756 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554633 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554644 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554658 4756 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554669 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554682 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554696 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554712 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554724 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554912 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.554960 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.555340 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557089 4756 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557192 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557210 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557219 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557229 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc 
kubenswrapper[4756]: I1124 12:28:08.557242 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557251 4756 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557261 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557272 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557281 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557290 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557300 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557310 4756 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557325 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557342 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557356 4756 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557366 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557376 4756 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557385 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557395 4756 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557404 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557421 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557438 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557454 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557465 4756 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557479 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557490 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557501 4756 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557510 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557552 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557569 4756 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557582 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557596 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557608 4756 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557617 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557628 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557640 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557651 4756 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557661 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557671 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557681 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557691 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557701 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557720 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557736 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557749 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557762 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557772 4756 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557786 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557797 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557805 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557815 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557824 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557833 4756 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557842 4756 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557851 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath 
\"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557861 4756 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557874 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557884 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557899 4756 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557910 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557919 4756 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557929 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557939 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557949 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557959 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.557969 4756 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558027 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558038 4756 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558048 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558057 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 
12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558067 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558077 4756 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558087 4756 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558097 4756 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558107 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558115 4756 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558125 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558135 4756 reconciler_common.go:293] "Volume 
detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558146 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558171 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558181 4756 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558191 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558201 4756 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558212 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558229 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558239 4756 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558249 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558258 4756 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558269 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558278 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558289 4756 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558299 4756 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc 
kubenswrapper[4756]: I1124 12:28:08.558308 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558321 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.558333 4756 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.559564 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.560406 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.560856 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.563403 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.564848 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.565437 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.566580 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.567124 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.568147 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.570028 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.571177 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.572543 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.573116 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.574043 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.575181 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.575686 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.577328 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.578120 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.578897 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.578174 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.580598 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.581594 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.582117 4756 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.582319 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.590102 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.591284 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.596872 4756 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 24 12:28:08 crc 
kubenswrapper[4756]: I1124 12:28:08.596942 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.598634 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.601383 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.602435 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.613388 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.615094 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.617077 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.617827 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.623283 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.626793 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.627246 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.628181 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.629288 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.631587 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.632728 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.636726 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.637321 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.647316 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.647901 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.648586 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.649659 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.650107 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.650942 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"08ca192200786382fae3f80d723ffef5a745f97e59be6e570bf8584fc69c4aef"} Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.652431 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.661280 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.667008 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb" exitCode=255 Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.667830 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb"} Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.670798 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.695967 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.727148 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.740408 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.756016 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.761993 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.764214 4756 scope.go:117] "RemoveContainer" containerID="60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.780275 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.786248 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.800548 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.810630 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.819625 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.829189 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.850554 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.850710 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: I1124 12:28:08.868363 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:08 crc kubenswrapper[4756]: W1124 12:28:08.884523 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-c1dcd4dad00418d7a990f78c7848c8698e24725d2bd6fd5ca78df22de34666d1 WatchSource:0}: Error finding container c1dcd4dad00418d7a990f78c7848c8698e24725d2bd6fd5ca78df22de34666d1: Status 404 returned error can't find the container with id 
c1dcd4dad00418d7a990f78c7848c8698e24725d2bd6fd5ca78df22de34666d1 Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.001353 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.006236 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.010668 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.027259 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.050261 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.065249 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.065339 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.065362 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.065442 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:28:10.06540821 +0000 UTC m=+22.422922352 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.065460 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.065511 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.065586 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.065631 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:10.065603155 +0000 UTC m=+22.423117297 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.065675 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.065728 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.065745 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.065763 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.065807 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-24 12:28:10.06579705 +0000 UTC m=+22.423311402 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.065520 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.065825 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.065861 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:10.065848391 +0000 UTC m=+22.423362753 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.065862 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.065903 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:10.065895343 +0000 UTC m=+22.423409695 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.070365 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.100176 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.100849 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.131649 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.145717 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.167180 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.183275 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.197894 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.218183 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.233039 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.243469 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.258684 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.274859 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.291552 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.317103 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.386147 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-bqhbk"] Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.386825 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.389023 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.389261 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.389609 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-66bwb"] Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.389784 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.390342 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.390488 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.390745 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-8p8dh"] Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.390958 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-h8ht2"] Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.391335 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-h8ht2" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.391501 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.391530 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.391972 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hnsz7"] Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.392859 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.399344 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.399506 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.399521 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.399985 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.400538 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.400547 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.400700 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.401000 4756 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.401528 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.401662 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.402426 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.402852 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.402872 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.402924 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.405272 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.407033 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.407065 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.417540 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.444329 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.469817 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-node-log\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.469864 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.469890 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-cni-netd\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.469909 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-cni-bin\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.469945 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-cnibin\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.469969 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-multus-socket-dir-parent\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470000 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/077d4abb-b72e-499f-98c2-628720d701dc-cni-binary-copy\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470095 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovn-node-metrics-cert\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470169 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-var-lib-cni-multus\") pod \"multus-66bwb\" (UID: 
\"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470200 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-multus-conf-dir\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470228 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-ovn\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470252 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-os-release\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470302 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470318 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-hostroot\") pod \"multus-66bwb\" (UID: 
\"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470333 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44lq\" (UniqueName: \"kubernetes.io/projected/077d4abb-b72e-499f-98c2-628720d701dc-kube-api-access-x44lq\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470377 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0f50ecd-811f-4df2-ae0c-83a787d6cbec-proxy-tls\") pod \"machine-config-daemon-8p8dh\" (UID: \"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\") " pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470393 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9lv6\" (UniqueName: \"kubernetes.io/projected/7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8-kube-api-access-z9lv6\") pod \"node-resolver-h8ht2\" (UID: \"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\") " pod="openshift-dns/node-resolver-h8ht2" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470412 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470427 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovnkube-config\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470443 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-run-netns\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470457 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-run-multus-certs\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470476 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-systemd\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470490 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-system-cni-dir\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470504 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-etc-kubernetes\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470524 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-etc-openvswitch\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470538 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-var-lib-openvswitch\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470554 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-log-socket\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470570 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovnkube-script-lib\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470588 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/f0f50ecd-811f-4df2-ae0c-83a787d6cbec-rootfs\") pod \"machine-config-daemon-8p8dh\" (UID: \"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\") " pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470605 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmsdd\" (UniqueName: \"kubernetes.io/projected/f0f50ecd-811f-4df2-ae0c-83a787d6cbec-kube-api-access-kmsdd\") pod \"machine-config-daemon-8p8dh\" (UID: \"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\") " pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470621 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-slash\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470634 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8-hosts-file\") pod \"node-resolver-h8ht2\" (UID: \"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\") " pod="openshift-dns/node-resolver-h8ht2" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470647 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-run-k8s-cni-cncf-io\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470672 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-cnibin\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470686 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-run-netns\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470701 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/077d4abb-b72e-499f-98c2-628720d701dc-multus-daemon-config\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470718 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-var-lib-cni-bin\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470735 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-var-lib-kubelet\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470750 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-cni-binary-copy\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470772 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8fpx\" (UniqueName: \"kubernetes.io/projected/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-kube-api-access-j8fpx\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470788 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zw8x\" (UniqueName: \"kubernetes.io/projected/60bc5508-89b8-4cc3-a0d6-e30abed70f05-kube-api-access-8zw8x\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470803 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-run-ovn-kubernetes\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470817 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-env-overrides\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470832 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-multus-cni-dir\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470852 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-openvswitch\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470867 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-system-cni-dir\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470882 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-os-release\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470898 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-kubelet\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc 
kubenswrapper[4756]: I1124 12:28:09.470919 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-systemd-units\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.470939 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0f50ecd-811f-4df2-ae0c-83a787d6cbec-mcd-auth-proxy-config\") pod \"machine-config-daemon-8p8dh\" (UID: \"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\") " pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.476668 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.531545 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.571855 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovnkube-config\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.571903 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-run-netns\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.571921 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-run-multus-certs\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.571939 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-etc-kubernetes\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.571957 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-systemd\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.571975 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-system-cni-dir\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.571991 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-etc-openvswitch\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572044 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmsdd\" (UniqueName: \"kubernetes.io/projected/f0f50ecd-811f-4df2-ae0c-83a787d6cbec-kube-api-access-kmsdd\") 
pod \"machine-config-daemon-8p8dh\" (UID: \"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\") " pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572062 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-slash\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572081 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-var-lib-openvswitch\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572097 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-log-socket\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572113 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovnkube-script-lib\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572129 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f0f50ecd-811f-4df2-ae0c-83a787d6cbec-rootfs\") pod \"machine-config-daemon-8p8dh\" (UID: 
\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\") " pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572147 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8-hosts-file\") pod \"node-resolver-h8ht2\" (UID: \"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\") " pod="openshift-dns/node-resolver-h8ht2" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572191 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-cnibin\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572209 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-run-k8s-cni-cncf-io\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572225 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-run-netns\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572243 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/077d4abb-b72e-499f-98c2-628720d701dc-multus-daemon-config\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc 
kubenswrapper[4756]: I1124 12:28:09.572261 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-cni-binary-copy\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572278 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8fpx\" (UniqueName: \"kubernetes.io/projected/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-kube-api-access-j8fpx\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572294 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zw8x\" (UniqueName: \"kubernetes.io/projected/60bc5508-89b8-4cc3-a0d6-e30abed70f05-kube-api-access-8zw8x\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572311 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-var-lib-cni-bin\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572329 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-var-lib-kubelet\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc 
kubenswrapper[4756]: I1124 12:28:09.572350 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-openvswitch\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572366 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-run-ovn-kubernetes\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572385 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-env-overrides\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572401 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-multus-cni-dir\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.572418 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0f50ecd-811f-4df2-ae0c-83a787d6cbec-mcd-auth-proxy-config\") pod \"machine-config-daemon-8p8dh\" (UID: \"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\") " pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 
12:28:09.572439 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-system-cni-dir\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573284 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-os-release\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573276 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-var-lib-cni-bin\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573323 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-kubelet\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573350 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-systemd-units\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573392 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8-hosts-file\") pod \"node-resolver-h8ht2\" (UID: \"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\") " pod="openshift-dns/node-resolver-h8ht2" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573441 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-var-lib-kubelet\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573457 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-node-log\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573415 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-systemd-units\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573418 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-node-log\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573512 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-openvswitch\") pod 
\"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573530 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-cnibin\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573551 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573571 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-run-ovn-kubernetes\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573594 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-cni-netd\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573650 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-cnibin\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: 
\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573674 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-cni-bin\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573698 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/077d4abb-b72e-499f-98c2-628720d701dc-cni-binary-copy\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573723 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-multus-socket-dir-parent\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573754 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovn-node-metrics-cert\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573779 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-var-lib-cni-multus\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " 
pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573806 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-multus-conf-dir\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573826 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-multus-cni-dir\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573858 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-ovn\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573883 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-os-release\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573889 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-system-cni-dir\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.573909 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z9lv6\" (UniqueName: \"kubernetes.io/projected/7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8-kube-api-access-z9lv6\") pod \"node-resolver-h8ht2\" (UID: \"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\") " pod="openshift-dns/node-resolver-h8ht2" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.574149 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.574191 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.574436 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-hostroot\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.574454 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-run-k8s-cni-cncf-io\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.574470 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-x44lq\" (UniqueName: \"kubernetes.io/projected/077d4abb-b72e-499f-98c2-628720d701dc-kube-api-access-x44lq\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.574552 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0f50ecd-811f-4df2-ae0c-83a787d6cbec-proxy-tls\") pod \"machine-config-daemon-8p8dh\" (UID: \"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\") " pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.574748 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-run-netns\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.574852 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0f50ecd-811f-4df2-ae0c-83a787d6cbec-mcd-auth-proxy-config\") pod \"machine-config-daemon-8p8dh\" (UID: \"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\") " pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.574857 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-kubelet\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.574992 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/077d4abb-b72e-499f-98c2-628720d701dc-multus-daemon-config\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575051 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-cnibin\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575086 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-etc-openvswitch\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575129 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575195 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-cni-netd\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575262 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-var-lib-cni-multus\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575350 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-run-netns\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575381 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-multus-conf-dir\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575441 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-ovn\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575481 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-host-run-multus-certs\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575602 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-etc-kubernetes\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" 
Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575680 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-systemd\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575706 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-os-release\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575765 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-env-overrides\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575778 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-system-cni-dir\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575897 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-hostroot\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.575937 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-var-lib-openvswitch\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.576051 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-slash\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.576072 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.576599 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-cni-bin\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.576716 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-log-socket\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.576720 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f0f50ecd-811f-4df2-ae0c-83a787d6cbec-rootfs\") pod \"machine-config-daemon-8p8dh\" (UID: 
\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\") " pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.576852 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/077d4abb-b72e-499f-98c2-628720d701dc-multus-socket-dir-parent\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.576991 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-cni-binary-copy\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.577123 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.577143 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-os-release\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.578285 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovnkube-config\") pod \"ovnkube-node-hnsz7\" (UID: 
\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.577999 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.579898 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovnkube-script-lib\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.584196 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/077d4abb-b72e-499f-98c2-628720d701dc-cni-binary-copy\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc 
kubenswrapper[4756]: I1124 12:28:09.599528 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0f50ecd-811f-4df2-ae0c-83a787d6cbec-proxy-tls\") pod \"machine-config-daemon-8p8dh\" (UID: \"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\") " pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.600677 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovn-node-metrics-cert\") pod \"ovnkube-node-hnsz7\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.630295 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.644529 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.665798 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.671478 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82"} Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.671531 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d"} Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.673108 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c1dcd4dad00418d7a990f78c7848c8698e24725d2bd6fd5ca78df22de34666d1"} Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.674531 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b"} Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.674589 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8c248f4a954537ed54723e333ff5b3afcb1277b2ea167d15e6e69a830423a5c7"} Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.678490 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.680545 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed"} Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.680955 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.681223 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: E1124 12:28:09.686586 4756 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.698262 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9lv6\" (UniqueName: \"kubernetes.io/projected/7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8-kube-api-access-z9lv6\") pod \"node-resolver-h8ht2\" (UID: \"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\") " pod="openshift-dns/node-resolver-h8ht2" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.698377 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zw8x\" (UniqueName: \"kubernetes.io/projected/60bc5508-89b8-4cc3-a0d6-e30abed70f05-kube-api-access-8zw8x\") pod \"ovnkube-node-hnsz7\" (UID: 
\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.699494 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmsdd\" (UniqueName: \"kubernetes.io/projected/f0f50ecd-811f-4df2-ae0c-83a787d6cbec-kube-api-access-kmsdd\") pod \"machine-config-daemon-8p8dh\" (UID: \"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\") " pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.700115 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x44lq\" (UniqueName: \"kubernetes.io/projected/077d4abb-b72e-499f-98c2-628720d701dc-kube-api-access-x44lq\") pod \"multus-66bwb\" (UID: \"077d4abb-b72e-499f-98c2-628720d701dc\") " pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.700302 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8fpx\" (UniqueName: \"kubernetes.io/projected/3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e-kube-api-access-j8fpx\") pod \"multus-additional-cni-plugins-bqhbk\" (UID: \"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\") " pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.702489 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.706147 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.714592 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-66bwb" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.718062 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.719689 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-h8ht2" Nov 24 12:28:09 crc kubenswrapper[4756]: W1124 12:28:09.719706 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60bc5508_89b8_4cc3_a0d6_e30abed70f05.slice/crio-7aab239b9fa2235f71e0cbe265d27742ad38c7d617b5a69f8ebd8883a162a125 WatchSource:0}: Error finding container 7aab239b9fa2235f71e0cbe265d27742ad38c7d617b5a69f8ebd8883a162a125: Status 404 returned error can't find the container with id 7aab239b9fa2235f71e0cbe265d27742ad38c7d617b5a69f8ebd8883a162a125 Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.724133 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.734798 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: W1124 12:28:09.741269 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod077d4abb_b72e_499f_98c2_628720d701dc.slice/crio-d5fdd5b22a2a566bb7671cdd356e1ce3e500ffeff39771745fe59164f279b935 WatchSource:0}: Error finding container d5fdd5b22a2a566bb7671cdd356e1ce3e500ffeff39771745fe59164f279b935: Status 404 returned error can't find the container with id d5fdd5b22a2a566bb7671cdd356e1ce3e500ffeff39771745fe59164f279b935 Nov 24 12:28:09 crc kubenswrapper[4756]: W1124 12:28:09.749294 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0f50ecd_811f_4df2_ae0c_83a787d6cbec.slice/crio-02a8f30c5da0db4cc9a9fab622469300b822757094da3c56146d6e85d59a4d5c WatchSource:0}: Error finding container 
02a8f30c5da0db4cc9a9fab622469300b822757094da3c56146d6e85d59a4d5c: Status 404 returned error can't find the container with id 02a8f30c5da0db4cc9a9fab622469300b822757094da3c56146d6e85d59a4d5c Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.760360 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.796181 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.811185 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.842449 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.862522 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.879408 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.894508 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.909735 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.931870 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.949983 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.970079 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:09 crc kubenswrapper[4756]: I1124 12:28:09.983682 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:09Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.001172 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" Nov 24 12:28:10 crc kubenswrapper[4756]: W1124 12:28:10.014334 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f22a5d1_b6e3_47e7_84de_f3d56e3eb50e.slice/crio-639e4e6c6d0f0435fd962e26fa85af8f9f0d17a5e36371516e2fba9f5821d5ef WatchSource:0}: Error finding container 639e4e6c6d0f0435fd962e26fa85af8f9f0d17a5e36371516e2fba9f5821d5ef: Status 404 returned error can't find the container with id 639e4e6c6d0f0435fd962e26fa85af8f9f0d17a5e36371516e2fba9f5821d5ef Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.080550 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.080732 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.080773 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.080796 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.080821 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.080903 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:28:12.080865506 +0000 UTC m=+24.438379648 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.080965 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.081037 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:12.08101763 +0000 UTC m=+24.438531772 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.081046 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.081085 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-24 12:28:12.081077321 +0000 UTC m=+24.438591703 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.081185 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.081207 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.081192 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.081225 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.081234 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.081250 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.081270 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:12.081261056 +0000 UTC m=+24.438775198 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.081287 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:12.081277366 +0000 UTC m=+24.438791748 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.482370 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.482477 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.482535 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.482574 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.482611 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:10 crc kubenswrapper[4756]: E1124 12:28:10.482648 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.484432 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.684665 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-h8ht2" event={"ID":"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8","Type":"ContainerStarted","Data":"317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c"} Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.684726 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-h8ht2" event={"ID":"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8","Type":"ContainerStarted","Data":"b863a555b1cbbcb7f4008c48523c84ffb9967e034ae81668501ac2b43c173c1a"} Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.688071 4756 generic.go:334] "Generic (PLEG): container finished" podID="3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e" containerID="d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe" exitCode=0 Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.688179 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" event={"ID":"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e","Type":"ContainerDied","Data":"d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe"} Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.688215 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" event={"ID":"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e","Type":"ContainerStarted","Data":"639e4e6c6d0f0435fd962e26fa85af8f9f0d17a5e36371516e2fba9f5821d5ef"} Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.690707 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0"} Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.690760 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"18d8e56c608685e778eab0b76fd45d35fe83d1e6bcbc388b06ca0b77ba191874"} Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.690776 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"02a8f30c5da0db4cc9a9fab622469300b822757094da3c56146d6e85d59a4d5c"} Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.692517 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-66bwb" event={"ID":"077d4abb-b72e-499f-98c2-628720d701dc","Type":"ContainerStarted","Data":"e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf"} Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.692563 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-66bwb" event={"ID":"077d4abb-b72e-499f-98c2-628720d701dc","Type":"ContainerStarted","Data":"d5fdd5b22a2a566bb7671cdd356e1ce3e500ffeff39771745fe59164f279b935"} Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.694142 4756 generic.go:334] "Generic (PLEG): container finished" podID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerID="7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa" exitCode=0 Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.694833 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" 
event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerDied","Data":"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa"} Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.694869 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerStarted","Data":"7aab239b9fa2235f71e0cbe265d27742ad38c7d617b5a69f8ebd8883a162a125"} Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.725882 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.757804 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.773705 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.788755 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.806169 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.822662 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.837581 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.849769 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.864893 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 
12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.881555 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.899302 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.943596 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.962756 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:10 crc kubenswrapper[4756]: I1124 12:28:10.988133 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:10Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.013920 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"dat
a-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441
ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.028272 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.051091 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.066342 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.086177 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.105088 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.118395 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.133198 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.148908 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.164360 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.177658 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.195290 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.210838 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.227733 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.699794 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerStarted","Data":"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3"} Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 
12:28:11.700268 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerStarted","Data":"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280"} Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.701547 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6"} Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.703755 4756 generic.go:334] "Generic (PLEG): container finished" podID="3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e" containerID="c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8" exitCode=0 Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.703790 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" event={"ID":"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e","Type":"ContainerDied","Data":"c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8"} Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.716518 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.737850 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.765956 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.785215 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.800957 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.813934 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.827535 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.846893 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.858604 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.873403 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.890351 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.904338 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.922398 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.934644 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.952086 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.965350 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.983826 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:11 crc kubenswrapper[4756]: I1124 12:28:11.999634 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:11Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.014541 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.032861 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.047006 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.058095 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.068998 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.083827 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.099490 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.103993 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.104094 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.104143 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.104190 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.104217 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.104359 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.104388 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.104401 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.104449 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:16.104433355 +0000 UTC m=+28.461947497 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.104511 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:28:16.104503627 +0000 UTC m=+28.462017769 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.104565 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.104578 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.104589 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.104615 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:16.10460712 +0000 UTC m=+28.462121262 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.104656 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.104684 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:16.104676512 +0000 UTC m=+28.462190654 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.104726 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.104782 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:16.104766594 +0000 UTC m=+28.462280736 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.119524 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.144584 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.158690 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wbl2t"] Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.159317 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-wbl2t" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.159513 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPa
th\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.161870 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.163320 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.163680 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.166766 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.175787 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.199227 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.212728 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.223708 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.239919 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.257059 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.306263 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/98bf97ea-6f41-4eb9-9e2c-fadff2d40af0-host\") pod \"node-ca-wbl2t\" (UID: \"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\") " pod="openshift-image-registry/node-ca-wbl2t" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.306312 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmvvn\" (UniqueName: \"kubernetes.io/projected/98bf97ea-6f41-4eb9-9e2c-fadff2d40af0-kube-api-access-fmvvn\") pod \"node-ca-wbl2t\" (UID: \"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\") " pod="openshift-image-registry/node-ca-wbl2t" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.306340 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/98bf97ea-6f41-4eb9-9e2c-fadff2d40af0-serviceca\") pod \"node-ca-wbl2t\" (UID: \"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\") " pod="openshift-image-registry/node-ca-wbl2t" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.309573 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.326446 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.352284 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.367462 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.406662 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.407489 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/98bf97ea-6f41-4eb9-9e2c-fadff2d40af0-host\") pod \"node-ca-wbl2t\" (UID: \"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\") " pod="openshift-image-registry/node-ca-wbl2t" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.407561 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmvvn\" (UniqueName: \"kubernetes.io/projected/98bf97ea-6f41-4eb9-9e2c-fadff2d40af0-kube-api-access-fmvvn\") pod \"node-ca-wbl2t\" (UID: \"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\") " pod="openshift-image-registry/node-ca-wbl2t" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.407602 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/98bf97ea-6f41-4eb9-9e2c-fadff2d40af0-host\") pod \"node-ca-wbl2t\" (UID: \"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\") " pod="openshift-image-registry/node-ca-wbl2t" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.407625 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/98bf97ea-6f41-4eb9-9e2c-fadff2d40af0-serviceca\") pod \"node-ca-wbl2t\" (UID: \"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\") " pod="openshift-image-registry/node-ca-wbl2t" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.410116 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/98bf97ea-6f41-4eb9-9e2c-fadff2d40af0-serviceca\") pod \"node-ca-wbl2t\" (UID: \"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\") " pod="openshift-image-registry/node-ca-wbl2t" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.459735 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmvvn\" (UniqueName: \"kubernetes.io/projected/98bf97ea-6f41-4eb9-9e2c-fadff2d40af0-kube-api-access-fmvvn\") pod \"node-ca-wbl2t\" (UID: \"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\") " pod="openshift-image-registry/node-ca-wbl2t" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.468370 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.475308 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.475417 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.475471 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.475629 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.475739 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:12 crc kubenswrapper[4756]: E1124 12:28:12.475964 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.493342 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-wbl2t" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.514298 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: W1124 12:28:12.515682 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98bf97ea_6f41_4eb9_9e2c_fadff2d40af0.slice/crio-de117214b62cd27a268b5c7b23c1caaacaa49d3f050ba656b9db72c68b0a25ac WatchSource:0}: Error finding container de117214b62cd27a268b5c7b23c1caaacaa49d3f050ba656b9db72c68b0a25ac: Status 404 returned error can't find the container with id de117214b62cd27a268b5c7b23c1caaacaa49d3f050ba656b9db72c68b0a25ac Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.552744 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.582902 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.708753 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wbl2t" event={"ID":"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0","Type":"ContainerStarted","Data":"de117214b62cd27a268b5c7b23c1caaacaa49d3f050ba656b9db72c68b0a25ac"} Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.713003 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" 
event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerStarted","Data":"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad"} Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.713042 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerStarted","Data":"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5"} Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.713052 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerStarted","Data":"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05"} Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.713061 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerStarted","Data":"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed"} Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.715398 4756 generic.go:334] "Generic (PLEG): container finished" podID="3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e" containerID="534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207" exitCode=0 Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.715551 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" event={"ID":"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e","Type":"ContainerDied","Data":"534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207"} Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.729688 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.742808 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.753743 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.764042 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.784818 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.822503 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.864850 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.900540 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.942872 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:12 crc kubenswrapper[4756]: I1124 12:28:12.982134 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:12Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.024650 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.063584 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.119606 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.143943 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.195860 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.723649 4756 generic.go:334] "Generic (PLEG): container finished" podID="3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e" containerID="1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198" exitCode=0 Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.723749 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" event={"ID":"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e","Type":"ContainerDied","Data":"1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198"} Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.726182 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wbl2t" event={"ID":"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0","Type":"ContainerStarted","Data":"213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b"} Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.752012 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.778524 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.797509 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.811201 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.820044 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.822035 4756 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.822198 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.822215 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.822604 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.830693 4756 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.830916 4756 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.830980 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50
Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.832927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.832986 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:13 crc 
kubenswrapper[4756]: I1124 12:28:13.832997 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.833031 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.833145 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:13Z","lastTransitionTime":"2025-11-24T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.846885 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: E1124 12:28:13.850842 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.854807 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.854846 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.854858 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.854877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.854891 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:13Z","lastTransitionTime":"2025-11-24T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:13 crc kubenswrapper[4756]: E1124 12:28:13.866843 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.868215 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.875590 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.876286 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.876302 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.876325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.876340 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:13Z","lastTransitionTime":"2025-11-24T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.883127 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c
1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: E1124 12:28:13.889412 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.894143 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.894198 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.894211 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.894233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.894246 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:13Z","lastTransitionTime":"2025-11-24T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.897630 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: E1124 12:28:13.905636 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.909372 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.909398 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.909407 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.909423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.909434 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:13Z","lastTransitionTime":"2025-11-24T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.913278 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: E1124 12:28:13.921116 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: E1124 12:28:13.921288 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.922991 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.923016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.923025 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.923040 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.923092 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:13Z","lastTransitionTime":"2025-11-24T12:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.927669 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.940793 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.953859 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.965949 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.977241 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:13 crc kubenswrapper[4756]: I1124 12:28:13.991140 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:13Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.010605 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.027415 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.027490 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.027502 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.027520 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.027531 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:14Z","lastTransitionTime":"2025-11-24T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.033282 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.049171 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.062993 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.073927 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.105109 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.129470 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.129770 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.129850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.129915 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.129978 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:14Z","lastTransitionTime":"2025-11-24T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.145317 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.183727 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.228514 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.232129 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.232203 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.232216 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.232270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.232283 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:14Z","lastTransitionTime":"2025-11-24T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.265817 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.308727 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.334827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.335214 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.335425 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.335605 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.335787 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:14Z","lastTransitionTime":"2025-11-24T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.342238 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.384100 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.426470 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.438237 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.438272 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.438285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.438304 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.438315 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:14Z","lastTransitionTime":"2025-11-24T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.475067 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:14 crc kubenswrapper[4756]: E1124 12:28:14.479362 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.479720 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:14 crc kubenswrapper[4756]: E1124 12:28:14.479810 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.480009 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:14 crc kubenswrapper[4756]: E1124 12:28:14.480236 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.541359 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.541406 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.541419 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.541438 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.541477 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:14Z","lastTransitionTime":"2025-11-24T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.643786 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.643828 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.643839 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.643855 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.643865 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:14Z","lastTransitionTime":"2025-11-24T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.735138 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerStarted","Data":"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3"} Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.738888 4756 generic.go:334] "Generic (PLEG): container finished" podID="3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e" containerID="c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22" exitCode=0 Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.738996 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" event={"ID":"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e","Type":"ContainerDied","Data":"c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22"} Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.746093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.746174 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.746192 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.746217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.746235 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:14Z","lastTransitionTime":"2025-11-24T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.768697 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.790128 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.812416 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.823049 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.843401 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12
:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.848459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.848496 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.848507 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.848525 4756 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.848537 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:14Z","lastTransitionTime":"2025-11-24T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.858958 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.881565 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.894776 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.907604 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.920856 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.932760 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.945675 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.950360 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.950389 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.950397 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.950412 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.950422 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:14Z","lastTransitionTime":"2025-11-24T12:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.956979 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:14 crc kubenswrapper[4756]: I1124 12:28:14.981768 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:14Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.022341 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.053428 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.053466 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.053474 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.053489 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.053500 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:15Z","lastTransitionTime":"2025-11-24T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.157802 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.157842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.157853 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.157870 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.157882 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:15Z","lastTransitionTime":"2025-11-24T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.262713 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.262787 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.262810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.262840 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.262863 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:15Z","lastTransitionTime":"2025-11-24T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.369768 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.369843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.369862 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.369891 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.369909 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:15Z","lastTransitionTime":"2025-11-24T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.473656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.473708 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.473724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.473745 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.473756 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:15Z","lastTransitionTime":"2025-11-24T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.576771 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.576817 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.576828 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.576843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.576851 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:15Z","lastTransitionTime":"2025-11-24T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.680014 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.680064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.680079 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.680102 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.680117 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:15Z","lastTransitionTime":"2025-11-24T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.745077 4756 generic.go:334] "Generic (PLEG): container finished" podID="3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e" containerID="394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720" exitCode=0 Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.745133 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" event={"ID":"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e","Type":"ContainerDied","Data":"394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720"} Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.764036 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.779456 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.782123 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.782223 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.782252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.782282 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.782299 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:15Z","lastTransitionTime":"2025-11-24T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.791305 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.811810 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.828783 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.843744 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.858790 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.871166 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.886230 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.886213 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.886284 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.886301 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.886325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.886342 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:15Z","lastTransitionTime":"2025-11-24T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.899441 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 
12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.914200 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.922910 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.942039 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12
:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.956902 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.981383 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:15Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.989348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.989393 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.989404 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.989419 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:15 crc kubenswrapper[4756]: I1124 12:28:15.989428 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:15Z","lastTransitionTime":"2025-11-24T12:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.093536 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.093574 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.093586 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.093602 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.093611 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:16Z","lastTransitionTime":"2025-11-24T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.146298 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.146418 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.146461 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.146525 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:28:24.146495084 +0000 UTC m=+36.504009256 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.146572 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.146614 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.146636 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.146676 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:24.146658298 +0000 UTC m=+36.504172470 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.146732 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.146781 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:24.146766831 +0000 UTC m=+36.504281013 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.146833 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.146882 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.146883 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered 
Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.146908 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.146910 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.146937 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.146984 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:24.146971746 +0000 UTC m=+36.504485918 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.147008 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:24.146996497 +0000 UTC m=+36.504510669 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.197184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.197236 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.197250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.197270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.197281 4756 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:16Z","lastTransitionTime":"2025-11-24T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.301256 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.301295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.301308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.301329 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.301346 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:16Z","lastTransitionTime":"2025-11-24T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.404846 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.404903 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.404923 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.404946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.404963 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:16Z","lastTransitionTime":"2025-11-24T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.474736 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.474788 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.474824 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.474898 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.475013 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:16 crc kubenswrapper[4756]: E1124 12:28:16.475143 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.507842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.507910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.507927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.507953 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.507971 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:16Z","lastTransitionTime":"2025-11-24T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.611773 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.611833 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.611845 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.611865 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.611882 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:16Z","lastTransitionTime":"2025-11-24T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.715555 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.715939 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.715951 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.715969 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.715981 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:16Z","lastTransitionTime":"2025-11-24T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.756890 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" event={"ID":"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e","Type":"ContainerStarted","Data":"bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412"} Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.772045 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.787246 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.801275 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.813106 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.818259 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.818336 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.818358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.818386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.818407 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:16Z","lastTransitionTime":"2025-11-24T12:28:16Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.834291 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc2
7f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.847013 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.866238 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.886445 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.900519 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.913514 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.920888 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.920917 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.920926 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.920946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.920958 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:16Z","lastTransitionTime":"2025-11-24T12:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.927676 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9a
f6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.945636 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.963184 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.979420 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:16 crc kubenswrapper[4756]: I1124 12:28:16.992186 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:16Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.022960 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.023000 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.023015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.023031 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.023041 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:17Z","lastTransitionTime":"2025-11-24T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.126344 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.126467 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.126521 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.126578 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.126645 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:17Z","lastTransitionTime":"2025-11-24T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.230423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.230836 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.231015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.231342 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.231585 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:17Z","lastTransitionTime":"2025-11-24T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.336624 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.336695 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.336715 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.336745 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.336765 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:17Z","lastTransitionTime":"2025-11-24T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.440462 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.441466 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.441762 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.441894 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.442060 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:17Z","lastTransitionTime":"2025-11-24T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.545609 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.545708 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.545726 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.545752 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.545767 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:17Z","lastTransitionTime":"2025-11-24T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.648477 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.648533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.648544 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.648563 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.648573 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:17Z","lastTransitionTime":"2025-11-24T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.751577 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.751627 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.751636 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.751656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.751667 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:17Z","lastTransitionTime":"2025-11-24T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.765076 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerStarted","Data":"0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4"} Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.765438 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.789714 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:17Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.799828 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.824066 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e23
26d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:17Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.843944 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:17Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.853912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:17 crc 
kubenswrapper[4756]: I1124 12:28:17.853942 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.853963 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.853981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.853994 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:17Z","lastTransitionTime":"2025-11-24T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.862443 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:17Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.880732 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:17Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.896594 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:17Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.914512 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:17Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.929678 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:17Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.947758 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:17Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.956372 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.956437 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.956463 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.956496 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.956521 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:17Z","lastTransitionTime":"2025-11-24T12:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.965281 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:17Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:17 crc kubenswrapper[4756]: I1124 12:28:17.984789 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:17Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:17.999885 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:17Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.014803 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.031512 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.052584 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.059362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.059402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.059419 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.059440 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.059455 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:18Z","lastTransitionTime":"2025-11-24T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.079931 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.101311 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.132928 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.149447 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.162793 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.162851 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.162864 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.162884 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.162896 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:18Z","lastTransitionTime":"2025-11-24T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.168808 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.187555 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.203321 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.222418 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.238202 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.251454 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.266033 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.266083 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.266091 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.266107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.266119 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:18Z","lastTransitionTime":"2025-11-24T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.270949 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.288497 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 
12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.306429 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c8319
1638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"po
dIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.321886 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.341831 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.369964 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.370002 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.370011 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.370029 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.370041 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:18Z","lastTransitionTime":"2025-11-24T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.474977 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.475555 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.475736 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.475895 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.476027 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:18Z","lastTransitionTime":"2025-11-24T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.475255 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.475316 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:18 crc kubenswrapper[4756]: E1124 12:28:18.476728 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.475558 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:18 crc kubenswrapper[4756]: E1124 12:28:18.477567 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:18 crc kubenswrapper[4756]: E1124 12:28:18.477794 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.490951 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473
a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.504118 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.524408 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.545706 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.562268 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.578420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:18 crc 
kubenswrapper[4756]: I1124 12:28:18.578496 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.578546 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.578568 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.578583 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:18Z","lastTransitionTime":"2025-11-24T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.582461 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.602899 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID
\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.617747 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.632013 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.645092 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.659612 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.673650 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.682477 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.682786 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.682801 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.682820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.682833 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:18Z","lastTransitionTime":"2025-11-24T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.684640 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.698075 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.709056 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.769755 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.772189 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.786697 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.787096 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.787113 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.787137 4756 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.787175 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:18Z","lastTransitionTime":"2025-11-24T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.799675 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.814731 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f
2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.824317 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.840359 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.854808 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.872262 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.886377 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.889972 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.890015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.890024 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.890042 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.890051 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:18Z","lastTransitionTime":"2025-11-24T12:28:18Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.908977 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.930522 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.951905 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.978142 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.993354 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.993388 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.993397 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.993414 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:18 crc kubenswrapper[4756]: I1124 12:28:18.993423 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:18Z","lastTransitionTime":"2025-11-24T12:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.003837 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:18Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.021674 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.040772 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.060679 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.073617 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.095589 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.095647 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.095656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.095675 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.095686 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:19Z","lastTransitionTime":"2025-11-24T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.197870 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.197947 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.197966 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.197996 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.198017 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:19Z","lastTransitionTime":"2025-11-24T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.300721 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.300762 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.300774 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.300794 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.300805 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:19Z","lastTransitionTime":"2025-11-24T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.403461 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.403515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.403527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.403547 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.403559 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:19Z","lastTransitionTime":"2025-11-24T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.508028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.508074 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.508084 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.508098 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.508107 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:19Z","lastTransitionTime":"2025-11-24T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.610887 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.610938 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.610948 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.610968 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.610979 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:19Z","lastTransitionTime":"2025-11-24T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.741832 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.741874 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.741888 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.741908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.741923 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:19Z","lastTransitionTime":"2025-11-24T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.778660 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovnkube-controller/0.log" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.781957 4756 generic.go:334] "Generic (PLEG): container finished" podID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerID="0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4" exitCode=1 Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.782011 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerDied","Data":"0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4"} Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.782877 4756 scope.go:117] "RemoveContainer" containerID="0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.817592 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.836990 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.844545 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.844596 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.844615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.844644 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.844663 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:19Z","lastTransitionTime":"2025-11-24T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.868645 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:19Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 12:28:19.690480 6028 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 12:28:19.690524 
6028 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 12:28:19.690566 6028 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 12:28:19.690646 6028 factory.go:656] Stopping watch factory\\\\nI1124 12:28:19.690683 6028 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 12:28:19.690700 6028 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.690939 6028 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.690704 6028 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 12:28:19.691109 6028 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 12:28:19.691457 6028 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.691646 6028 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.691762 6028 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4d
be253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.883968 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.901293 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.917561 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.932676 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.948223 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.948276 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.948294 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.948320 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.948347 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:19Z","lastTransitionTime":"2025-11-24T12:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.955952 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.977810 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:19 crc kubenswrapper[4756]: I1124 12:28:19.993029 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:28:19Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.008568 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.024943 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.035782 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.050387 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.050436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.050448 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.050468 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.050480 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:20Z","lastTransitionTime":"2025-11-24T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.052268 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.062915 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.154038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.154102 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.154120 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.154150 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.154197 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:20Z","lastTransitionTime":"2025-11-24T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.257773 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.257855 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.257873 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.257899 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.257915 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:20Z","lastTransitionTime":"2025-11-24T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.361207 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.361249 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.361261 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.361279 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.361290 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:20Z","lastTransitionTime":"2025-11-24T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.464012 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.464045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.464054 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.464071 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.464081 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:20Z","lastTransitionTime":"2025-11-24T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.474856 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.474908 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:20 crc kubenswrapper[4756]: E1124 12:28:20.474971 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.474924 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:20 crc kubenswrapper[4756]: E1124 12:28:20.475042 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:20 crc kubenswrapper[4756]: E1124 12:28:20.475178 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.567238 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.567276 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.567286 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.567305 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.567316 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:20Z","lastTransitionTime":"2025-11-24T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.669596 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.669640 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.669650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.669672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.669681 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:20Z","lastTransitionTime":"2025-11-24T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.772923 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.772964 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.772973 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.772990 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.773002 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:20Z","lastTransitionTime":"2025-11-24T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.817960 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovnkube-controller/0.log" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.821614 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerStarted","Data":"d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498"} Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.821750 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.841098 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5
c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.851972 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.863508 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.874897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.874961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.874971 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.875019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.875042 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:20Z","lastTransitionTime":"2025-11-24T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.875397 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.887751 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.907006 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:19Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 12:28:19.690480 6028 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 12:28:19.690524 6028 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 12:28:19.690566 6028 handler.go:190] 
Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 12:28:19.690646 6028 factory.go:656] Stopping watch factory\\\\nI1124 12:28:19.690683 6028 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 12:28:19.690700 6028 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.690939 6028 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.690704 6028 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 12:28:19.691109 6028 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 12:28:19.691457 6028 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.691646 6028 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.691762 6028 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.928818 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.944933 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.965075 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.978135 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.978211 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.978223 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.978243 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.978260 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:20Z","lastTransitionTime":"2025-11-24T12:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:20 crc kubenswrapper[4756]: I1124 12:28:20.982914 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-24T12:28:20Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.003049 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.017846 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.027428 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.042298 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.068606 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.080968 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.081023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.081041 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.081065 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.081082 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:21Z","lastTransitionTime":"2025-11-24T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.183279 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.183325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.183334 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.183350 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.183360 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:21Z","lastTransitionTime":"2025-11-24T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.286038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.286075 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.286088 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.286107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.286117 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:21Z","lastTransitionTime":"2025-11-24T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.389916 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.389971 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.389988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.390015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.390033 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:21Z","lastTransitionTime":"2025-11-24T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.493815 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.493867 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.493878 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.493897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.493908 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:21Z","lastTransitionTime":"2025-11-24T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.596041 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x"] Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.597047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.597138 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.597148 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.597389 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.597432 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.597452 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:21Z","lastTransitionTime":"2025-11-24T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.599861 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.600089 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.638016 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1
a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673147
31ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.655849 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.685443 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:19Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 12:28:19.690480 6028 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 12:28:19.690524 6028 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 12:28:19.690566 6028 handler.go:190] 
Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 12:28:19.690646 6028 factory.go:656] Stopping watch factory\\\\nI1124 12:28:19.690683 6028 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 12:28:19.690700 6028 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.690939 6028 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.690704 6028 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 12:28:19.691109 6028 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 12:28:19.691457 6028 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.691646 6028 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.691762 6028 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.699457 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.700541 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.700590 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.700603 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.700624 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.700637 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:21Z","lastTransitionTime":"2025-11-24T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.712571 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.724655 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.725952 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b70e3fcb-095c-48cb-8152-3a6a125d87e4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f7x8x\" (UID: \"b70e3fcb-095c-48cb-8152-3a6a125d87e4\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.726001 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvcd9\" (UniqueName: \"kubernetes.io/projected/b70e3fcb-095c-48cb-8152-3a6a125d87e4-kube-api-access-pvcd9\") pod \"ovnkube-control-plane-749d76644c-f7x8x\" (UID: \"b70e3fcb-095c-48cb-8152-3a6a125d87e4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.726068 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b70e3fcb-095c-48cb-8152-3a6a125d87e4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f7x8x\" (UID: \"b70e3fcb-095c-48cb-8152-3a6a125d87e4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.726095 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b70e3fcb-095c-48cb-8152-3a6a125d87e4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f7x8x\" (UID: \"b70e3fcb-095c-48cb-8152-3a6a125d87e4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.734577 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.746218 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.770848 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.785049 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.800963 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.804380 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.804435 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.804453 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.804475 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.804494 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:21Z","lastTransitionTime":"2025-11-24T12:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.813348 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.827051 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.827349 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b70e3fcb-095c-48cb-8152-3a6a125d87e4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f7x8x\" (UID: \"b70e3fcb-095c-48cb-8152-3a6a125d87e4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.827410 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b70e3fcb-095c-48cb-8152-3a6a125d87e4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f7x8x\" (UID: \"b70e3fcb-095c-48cb-8152-3a6a125d87e4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.827442 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvcd9\" (UniqueName: \"kubernetes.io/projected/b70e3fcb-095c-48cb-8152-3a6a125d87e4-kube-api-access-pvcd9\") pod \"ovnkube-control-plane-749d76644c-f7x8x\" (UID: \"b70e3fcb-095c-48cb-8152-3a6a125d87e4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.827484 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b70e3fcb-095c-48cb-8152-3a6a125d87e4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f7x8x\" (UID: \"b70e3fcb-095c-48cb-8152-3a6a125d87e4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.827579 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovnkube-controller/1.log" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.828418 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovnkube-controller/0.log" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.829363 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b70e3fcb-095c-48cb-8152-3a6a125d87e4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f7x8x\" (UID: \"b70e3fcb-095c-48cb-8152-3a6a125d87e4\") 
" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.829525 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b70e3fcb-095c-48cb-8152-3a6a125d87e4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f7x8x\" (UID: \"b70e3fcb-095c-48cb-8152-3a6a125d87e4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.835339 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b70e3fcb-095c-48cb-8152-3a6a125d87e4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f7x8x\" (UID: \"b70e3fcb-095c-48cb-8152-3a6a125d87e4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.835740 4756 generic.go:334] "Generic (PLEG): container finished" podID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerID="d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498" exitCode=1 Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.835778 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerDied","Data":"d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498"} Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.835824 4756 scope.go:117] "RemoveContainer" containerID="0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.839713 4756 scope.go:117] "RemoveContainer" containerID="d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498" Nov 24 12:28:21 crc kubenswrapper[4756]: E1124 12:28:21.839987 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.847308 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.847743 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvcd9\" (UniqueName: \"kubernetes.io/projected/b70e3fcb-095c-48cb-8152-3a6a125d87e4-kube-api-access-pvcd9\") pod \"ovnkube-control-plane-749d76644c-f7x8x\" (UID: \"b70e3fcb-095c-48cb-8152-3a6a125d87e4\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.859209 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.871209 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.882717 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.897235 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.907378 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.907540 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.907567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.907578 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.907595 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.907610 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:21Z","lastTransitionTime":"2025-11-24T12:28:21Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.917040 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.922277 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.946549 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.960836 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.982517 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e60a3dbd1436a8f53db8d6674bbe2a4b4898a41e48329f6e9a0c5568a407ac4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:19Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 12:28:19.690480 6028 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 12:28:19.690524 6028 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 12:28:19.690566 6028 handler.go:190] 
Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 12:28:19.690646 6028 factory.go:656] Stopping watch factory\\\\nI1124 12:28:19.690683 6028 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 12:28:19.690700 6028 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.690939 6028 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.690704 6028 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 12:28:19.691109 6028 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 12:28:19.691457 6028 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.691646 6028 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:19.691762 6028 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:20Z\\\",\\\"message\\\":\\\"28:20.739840 6167 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.739507 6167 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 12:28:20.740197 6167 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.742005 6167 handler.go:190] 
Sending *v1.Namespace event handler 1 for removal\\\\nI1124 12:28:20.742043 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 12:28:20.743667 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 12:28:20.743776 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 12:28:20.743789 6167 factory.go:656] Stopping watch factory\\\\nI1124 12:28:20.772234 6167 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 12:28:20.772257 6167 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 12:28:20.772300 6167 ovnkube.go:599] Stopped ovnkube\\\\nI1124 12:28:20.772333 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 12:28:20.772412 6167 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:21 crc kubenswrapper[4756]: I1124 12:28:21.997509 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:21Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.012709 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.012773 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.012782 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.012799 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.012809 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:22Z","lastTransitionTime":"2025-11-24T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.014400 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.028752 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.045240 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.056505 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.071551 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.084759 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.098599 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.118530 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.118567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.118578 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.118593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.118605 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:22Z","lastTransitionTime":"2025-11-24T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.118241 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.220060 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.220093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.220104 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.220118 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.220127 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:22Z","lastTransitionTime":"2025-11-24T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.322092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.322145 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.322173 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.322195 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.322208 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:22Z","lastTransitionTime":"2025-11-24T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.424739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.424795 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.424812 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.424833 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.424849 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:22Z","lastTransitionTime":"2025-11-24T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.474868 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:22 crc kubenswrapper[4756]: E1124 12:28:22.475019 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.475330 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:22 crc kubenswrapper[4756]: E1124 12:28:22.475415 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.475591 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:22 crc kubenswrapper[4756]: E1124 12:28:22.475772 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.527467 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.527506 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.527517 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.527534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.527547 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:22Z","lastTransitionTime":"2025-11-24T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.630756 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.631006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.631031 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.631057 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.631075 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:22Z","lastTransitionTime":"2025-11-24T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.735725 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.735787 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.735805 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.735829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.735847 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:22Z","lastTransitionTime":"2025-11-24T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.838745 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.839868 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.840047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.840314 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.840493 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:22Z","lastTransitionTime":"2025-11-24T12:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.840639 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovnkube-controller/1.log" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.845354 4756 scope.go:117] "RemoveContainer" containerID="d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498" Nov 24 12:28:22 crc kubenswrapper[4756]: E1124 12:28:22.845995 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.849299 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" event={"ID":"b70e3fcb-095c-48cb-8152-3a6a125d87e4","Type":"ContainerStarted","Data":"204c180aed3e42b0712bdf4045ff2c33d8e872767dbd0d48b80a72de0bbaee46"} Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.849346 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" event={"ID":"b70e3fcb-095c-48cb-8152-3a6a125d87e4","Type":"ContainerStarted","Data":"30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba"} Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.849359 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" event={"ID":"b70e3fcb-095c-48cb-8152-3a6a125d87e4","Type":"ContainerStarted","Data":"0b9bf035c5c37e3498601e63d958f4b5cfe3aa7b4bce37d052d63e4acf58a8e2"} Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.867937 4756 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.884130 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.903529 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.915643 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.943570 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.943598 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.943606 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.943619 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.943628 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:22Z","lastTransitionTime":"2025-11-24T12:28:22Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.949928 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc2
7f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.963909 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:22 crc kubenswrapper[4756]: I1124 12:28:22.994301 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:20Z\\\",\\\"message\\\":\\\"28:20.739840 6167 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.739507 6167 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 12:28:20.740197 6167 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.742005 6167 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 12:28:20.742043 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 12:28:20.743667 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 12:28:20.743776 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 12:28:20.743789 6167 factory.go:656] Stopping watch factory\\\\nI1124 12:28:20.772234 6167 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 12:28:20.772257 6167 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 12:28:20.772300 6167 ovnkube.go:599] Stopped ovnkube\\\\nI1124 12:28:20.772333 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 12:28:20.772412 6167 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c2
0cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:22Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.016424 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.038703 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.046532 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.046835 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.047017 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 
12:28:23.047225 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.047423 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:23Z","lastTransitionTime":"2025-11-24T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.056649 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.074721 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.076298 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.098775 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.121056 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.140583 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.150790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.150976 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.150996 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.151021 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.151037 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:23Z","lastTransitionTime":"2025-11-24T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.158921 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.172362 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.193263 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.209666 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.231611 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:20Z\\\",\\\"message\\\":\\\"28:20.739840 6167 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.739507 6167 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 12:28:20.740197 6167 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.742005 6167 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 12:28:20.742043 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 12:28:20.743667 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 12:28:20.743776 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 12:28:20.743789 6167 factory.go:656] Stopping watch factory\\\\nI1124 12:28:20.772234 6167 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 12:28:20.772257 6167 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 12:28:20.772300 6167 ovnkube.go:599] Stopped ovnkube\\\\nI1124 12:28:20.772333 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 12:28:20.772412 6167 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c2
0cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.245470 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.253284 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.253348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.253358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.253374 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.253383 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:23Z","lastTransitionTime":"2025-11-24T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.266111 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.281556 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.299954 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.313097 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204c180aed3e42b0712bdf4045ff2c33d8e87
2767dbd0d48b80a72de0bbaee46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.328696 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.345814 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.356079 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.356143 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.356212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.356243 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.356267 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:23Z","lastTransitionTime":"2025-11-24T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.358426 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.371701 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6
de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.386117 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 
12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.402661 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c8319
1638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"po
dIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.416307 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.432572 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.459213 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.459263 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.459276 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.459293 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.459307 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:23Z","lastTransitionTime":"2025-11-24T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.510628 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-r955c"] Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.511188 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:23 crc kubenswrapper[4756]: E1124 12:28:23.511266 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.528331 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204c180aed3e42b0712bdf4045ff2c33d8e872767dbd0d48b80a72de0bbaee46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.546847 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.562383 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.562428 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.562439 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.562470 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.562483 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:23Z","lastTransitionTime":"2025-11-24T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.564193 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.580089 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.593442 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.610622 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159
ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.629702 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.645260 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs\") pod \"network-metrics-daemon-r955c\" (UID: \"6662f3ec-8806-4797-a7a5-f1606c4a54cf\") " pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.645309 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvw8k\" (UniqueName: \"kubernetes.io/projected/6662f3ec-8806-4797-a7a5-f1606c4a54cf-kube-api-access-zvw8k\") pod \"network-metrics-daemon-r955c\" (UID: \"6662f3ec-8806-4797-a7a5-f1606c4a54cf\") " pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.647056 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.659574 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.664996 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.665031 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.665044 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.665060 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.665072 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:23Z","lastTransitionTime":"2025-11-24T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.676133 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.693034 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.715069 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.728751 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.746413 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs\") pod \"network-metrics-daemon-r955c\" (UID: \"6662f3ec-8806-4797-a7a5-f1606c4a54cf\") " pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.746511 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvw8k\" (UniqueName: \"kubernetes.io/projected/6662f3ec-8806-4797-a7a5-f1606c4a54cf-kube-api-access-zvw8k\") pod \"network-metrics-daemon-r955c\" (UID: \"6662f3ec-8806-4797-a7a5-f1606c4a54cf\") " pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:23 crc kubenswrapper[4756]: E1124 12:28:23.746696 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:23 crc kubenswrapper[4756]: E1124 12:28:23.746813 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs podName:6662f3ec-8806-4797-a7a5-f1606c4a54cf nodeName:}" failed. No retries permitted until 2025-11-24 12:28:24.246786223 +0000 UTC m=+36.604300405 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs") pod "network-metrics-daemon-r955c" (UID: "6662f3ec-8806-4797-a7a5-f1606c4a54cf") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.761603 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397
621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.767666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.767700 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.767711 4756 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.767727 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.767738 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:23Z","lastTransitionTime":"2025-11-24T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.773868 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvw8k\" (UniqueName: \"kubernetes.io/projected/6662f3ec-8806-4797-a7a5-f1606c4a54cf-kube-api-access-zvw8k\") pod \"network-metrics-daemon-r955c\" (UID: \"6662f3ec-8806-4797-a7a5-f1606c4a54cf\") " pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.779879 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.802242 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:20Z\\\",\\\"message\\\":\\\"28:20.739840 6167 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.739507 6167 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 12:28:20.740197 6167 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.742005 6167 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 12:28:20.742043 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 12:28:20.743667 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 12:28:20.743776 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 12:28:20.743789 6167 factory.go:656] Stopping watch factory\\\\nI1124 12:28:20.772234 6167 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 12:28:20.772257 6167 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 12:28:20.772300 6167 ovnkube.go:599] Stopped ovnkube\\\\nI1124 12:28:20.772333 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 12:28:20.772412 6167 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c2
0cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.820359 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r955c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6662f3ec-8806-4797-a7a5-f1606c4a54cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r955c\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:23Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.870738 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.870784 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.870795 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.870812 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.870823 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:23Z","lastTransitionTime":"2025-11-24T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.973851 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.973896 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.973911 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.973929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:23 crc kubenswrapper[4756]: I1124 12:28:23.973940 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:23Z","lastTransitionTime":"2025-11-24T12:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.000612 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.000680 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.000693 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.000712 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.000726 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.013727 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.016866 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.016898 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.016906 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.016920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.016928 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.034346 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.037889 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.037948 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.037956 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.037974 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.037988 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.055312 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.059268 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.059305 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.059315 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.059331 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.059342 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.071886 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.076214 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.076273 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.076286 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.076305 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.076319 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.088816 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:24Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.088993 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.090422 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.090464 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.090475 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.090491 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.090503 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.151242 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.151380 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.151425 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:28:40.151399887 +0000 UTC m=+52.508914029 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.151481 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.151493 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.151510 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.151520 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.151537 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.151562 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:40.151547981 +0000 UTC m=+52.509062123 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.151580 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.151598 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.151633 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:28:24 crc 
kubenswrapper[4756]: E1124 12:28:24.151645 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.151648 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.151653 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.151634 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:40.151624633 +0000 UTC m=+52.509138775 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.151721 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:40.151713075 +0000 UTC m=+52.509227217 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.151736 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 12:28:40.151728745 +0000 UTC m=+52.509243027 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.193325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.193382 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.193400 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.193424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.193442 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.252819 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs\") pod \"network-metrics-daemon-r955c\" (UID: \"6662f3ec-8806-4797-a7a5-f1606c4a54cf\") " pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.252986 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.253041 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs podName:6662f3ec-8806-4797-a7a5-f1606c4a54cf nodeName:}" failed. No retries permitted until 2025-11-24 12:28:25.253026783 +0000 UTC m=+37.610540925 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs") pod "network-metrics-daemon-r955c" (UID: "6662f3ec-8806-4797-a7a5-f1606c4a54cf") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.296069 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.296124 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.296133 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.296169 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.296182 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.399202 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.399262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.399279 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.399302 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.399321 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.474670 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.474710 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.474893 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.474942 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.475094 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:24 crc kubenswrapper[4756]: E1124 12:28:24.475330 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.502896 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.502957 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.502979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.503009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.503030 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.605811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.605883 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.605906 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.605935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.605956 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.709358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.709402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.709412 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.709431 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.709442 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.813182 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.813270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.813282 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.813302 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.813331 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.916815 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.916886 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.916897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.916918 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:24 crc kubenswrapper[4756]: I1124 12:28:24.916932 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:24Z","lastTransitionTime":"2025-11-24T12:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.020659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.020744 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.020779 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.020803 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.020819 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:25Z","lastTransitionTime":"2025-11-24T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.123915 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.123981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.123993 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.124018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.124038 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:25Z","lastTransitionTime":"2025-11-24T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.227255 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.227294 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.227305 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.227321 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.227329 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:25Z","lastTransitionTime":"2025-11-24T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.264592 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs\") pod \"network-metrics-daemon-r955c\" (UID: \"6662f3ec-8806-4797-a7a5-f1606c4a54cf\") " pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:25 crc kubenswrapper[4756]: E1124 12:28:25.264799 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:25 crc kubenswrapper[4756]: E1124 12:28:25.264891 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs podName:6662f3ec-8806-4797-a7a5-f1606c4a54cf nodeName:}" failed. No retries permitted until 2025-11-24 12:28:27.264868524 +0000 UTC m=+39.622382696 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs") pod "network-metrics-daemon-r955c" (UID: "6662f3ec-8806-4797-a7a5-f1606c4a54cf") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.329593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.329656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.329680 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.329716 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.329741 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:25Z","lastTransitionTime":"2025-11-24T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.433297 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.433378 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.433400 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.433430 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.433453 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:25Z","lastTransitionTime":"2025-11-24T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.475532 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:25 crc kubenswrapper[4756]: E1124 12:28:25.475807 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.536401 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.536464 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.536475 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.536493 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.536507 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:25Z","lastTransitionTime":"2025-11-24T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.640641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.640714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.640740 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.640778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.640798 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:25Z","lastTransitionTime":"2025-11-24T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.743925 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.743987 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.744000 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.744023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.744038 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:25Z","lastTransitionTime":"2025-11-24T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.847483 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.847520 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.847530 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.847547 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.847557 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:25Z","lastTransitionTime":"2025-11-24T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.951334 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.951398 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.951414 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.951434 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:25 crc kubenswrapper[4756]: I1124 12:28:25.951445 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:25Z","lastTransitionTime":"2025-11-24T12:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.054913 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.054971 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.055014 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.055036 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.055050 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:26Z","lastTransitionTime":"2025-11-24T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.159233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.159314 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.159340 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.159378 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.159408 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:26Z","lastTransitionTime":"2025-11-24T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.262404 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.262475 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.262492 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.262518 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.262536 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:26Z","lastTransitionTime":"2025-11-24T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.366217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.366287 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.366303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.366330 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.366351 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:26Z","lastTransitionTime":"2025-11-24T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.469877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.469920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.469929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.469946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.469958 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:26Z","lastTransitionTime":"2025-11-24T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.475460 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.475495 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:26 crc kubenswrapper[4756]: E1124 12:28:26.475702 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.475772 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:26 crc kubenswrapper[4756]: E1124 12:28:26.475987 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:26 crc kubenswrapper[4756]: E1124 12:28:26.476101 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.572691 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.572759 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.572778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.572800 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.572814 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:26Z","lastTransitionTime":"2025-11-24T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.677262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.677325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.677338 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.677368 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.677383 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:26Z","lastTransitionTime":"2025-11-24T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.780860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.780933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.780947 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.780967 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.780978 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:26Z","lastTransitionTime":"2025-11-24T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.884228 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.884304 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.884323 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.884352 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.884371 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:26Z","lastTransitionTime":"2025-11-24T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.987375 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.987451 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.987462 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.987480 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:26 crc kubenswrapper[4756]: I1124 12:28:26.987515 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:26Z","lastTransitionTime":"2025-11-24T12:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.093227 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.093283 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.093295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.093343 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.093358 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:27Z","lastTransitionTime":"2025-11-24T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.197566 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.197719 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.197728 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.197753 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.197772 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:27Z","lastTransitionTime":"2025-11-24T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.288983 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs\") pod \"network-metrics-daemon-r955c\" (UID: \"6662f3ec-8806-4797-a7a5-f1606c4a54cf\") " pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:27 crc kubenswrapper[4756]: E1124 12:28:27.289293 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:27 crc kubenswrapper[4756]: E1124 12:28:27.289406 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs podName:6662f3ec-8806-4797-a7a5-f1606c4a54cf nodeName:}" failed. No retries permitted until 2025-11-24 12:28:31.289384799 +0000 UTC m=+43.646898941 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs") pod "network-metrics-daemon-r955c" (UID: "6662f3ec-8806-4797-a7a5-f1606c4a54cf") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.301132 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.301237 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.301255 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.301279 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.301295 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:27Z","lastTransitionTime":"2025-11-24T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.404051 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.404130 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.404196 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.404226 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.404254 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:27Z","lastTransitionTime":"2025-11-24T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.475553 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:27 crc kubenswrapper[4756]: E1124 12:28:27.475791 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.506989 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.507028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.507037 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.507050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.507060 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:27Z","lastTransitionTime":"2025-11-24T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.610407 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.610498 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.610508 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.610528 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.610539 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:27Z","lastTransitionTime":"2025-11-24T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.714247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.714306 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.714318 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.714339 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.714351 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:27Z","lastTransitionTime":"2025-11-24T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.816935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.817102 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.817457 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.817503 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.817516 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:27Z","lastTransitionTime":"2025-11-24T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.922703 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.922808 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.922827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.922857 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:27 crc kubenswrapper[4756]: I1124 12:28:27.922885 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:27Z","lastTransitionTime":"2025-11-24T12:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.026369 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.026420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.026442 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.026475 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.026496 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:28Z","lastTransitionTime":"2025-11-24T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.130549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.131153 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.131419 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.131447 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.131774 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:28Z","lastTransitionTime":"2025-11-24T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.236019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.236123 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.236149 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.236219 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.236248 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:28Z","lastTransitionTime":"2025-11-24T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.340250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.340351 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.340374 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.340407 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.340468 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:28Z","lastTransitionTime":"2025-11-24T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.444760 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.444836 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.444853 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.444877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.444894 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:28Z","lastTransitionTime":"2025-11-24T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.475566 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.475639 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:28 crc kubenswrapper[4756]: E1124 12:28:28.475721 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:28 crc kubenswrapper[4756]: E1124 12:28:28.475848 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.476109 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:28 crc kubenswrapper[4756]: E1124 12:28:28.476252 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.499409 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.512472 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.526630 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.540187 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204c180aed3e42b0712bdf4045ff2c33d8e87
2767dbd0d48b80a72de0bbaee46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.547245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.547281 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.547291 4756 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.547308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.547321 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:28Z","lastTransitionTime":"2025-11-24T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.556561 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.573730 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.590113 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.608430 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159
ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.623868 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.643105 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.650258 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.650538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.650661 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.650806 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.650898 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:28Z","lastTransitionTime":"2025-11-24T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.659955 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.678486 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.694510 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.710807 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.741832 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:20Z\\\",\\\"message\\\":\\\"28:20.739840 6167 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.739507 6167 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 12:28:20.740197 6167 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.742005 6167 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 12:28:20.742043 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 12:28:20.743667 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 12:28:20.743776 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 12:28:20.743789 6167 factory.go:656] Stopping watch factory\\\\nI1124 12:28:20.772234 6167 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 12:28:20.772257 6167 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 12:28:20.772300 6167 ovnkube.go:599] Stopped ovnkube\\\\nI1124 12:28:20.772333 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 12:28:20.772412 6167 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c2
0cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.753590 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.753649 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.753661 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.753683 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.753696 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:28Z","lastTransitionTime":"2025-11-24T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.763475 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r955c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6662f3ec-8806-4797-a7a5-f1606c4a54cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r955c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc 
kubenswrapper[4756]: I1124 12:28:28.782662 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:28Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.856299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.856350 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.856367 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.856392 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.856405 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:28Z","lastTransitionTime":"2025-11-24T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.959637 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.959702 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.959714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.959732 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:28 crc kubenswrapper[4756]: I1124 12:28:28.959747 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:28Z","lastTransitionTime":"2025-11-24T12:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.063908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.063949 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.063961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.063981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.063994 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:29Z","lastTransitionTime":"2025-11-24T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.200538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.200587 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.200598 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.200616 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.200628 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:29Z","lastTransitionTime":"2025-11-24T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.304696 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.304780 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.304794 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.304814 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.305137 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:29Z","lastTransitionTime":"2025-11-24T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.316830 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.317813 4756 scope.go:117] "RemoveContainer" containerID="d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498" Nov 24 12:28:29 crc kubenswrapper[4756]: E1124 12:28:29.318216 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.408939 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.408993 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.409005 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.409025 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.409038 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:29Z","lastTransitionTime":"2025-11-24T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.475454 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:29 crc kubenswrapper[4756]: E1124 12:28:29.475575 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.512693 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.512759 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.512771 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.512794 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.512811 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:29Z","lastTransitionTime":"2025-11-24T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.615858 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.615915 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.615929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.615951 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.615969 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:29Z","lastTransitionTime":"2025-11-24T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.718218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.718289 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.718298 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.718317 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.718328 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:29Z","lastTransitionTime":"2025-11-24T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.820799 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.820855 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.820867 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.820886 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.820898 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:29Z","lastTransitionTime":"2025-11-24T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.927202 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.927255 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.927265 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.927287 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:29 crc kubenswrapper[4756]: I1124 12:28:29.927300 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:29Z","lastTransitionTime":"2025-11-24T12:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.029611 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.029645 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.029670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.029685 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.029694 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:30Z","lastTransitionTime":"2025-11-24T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.132759 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.133349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.133368 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.133392 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.133403 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:30Z","lastTransitionTime":"2025-11-24T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.236658 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.236701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.236712 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.236732 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.236746 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:30Z","lastTransitionTime":"2025-11-24T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.340569 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.340641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.340655 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.340677 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.340693 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:30Z","lastTransitionTime":"2025-11-24T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.444339 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.444390 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.444408 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.444429 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.444443 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:30Z","lastTransitionTime":"2025-11-24T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.475191 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.475334 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.475334 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:30 crc kubenswrapper[4756]: E1124 12:28:30.475514 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:30 crc kubenswrapper[4756]: E1124 12:28:30.475600 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:30 crc kubenswrapper[4756]: E1124 12:28:30.475666 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.547745 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.547822 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.547841 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.547873 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.547892 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:30Z","lastTransitionTime":"2025-11-24T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.651259 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.651650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.651846 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.651985 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.652128 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:30Z","lastTransitionTime":"2025-11-24T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.756480 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.756555 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.756568 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.756592 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.756607 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:30Z","lastTransitionTime":"2025-11-24T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.860295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.860379 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.860397 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.860424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.860442 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:30Z","lastTransitionTime":"2025-11-24T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.963772 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.964126 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.964368 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.964505 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:30 crc kubenswrapper[4756]: I1124 12:28:30.964587 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:30Z","lastTransitionTime":"2025-11-24T12:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.067827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.068233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.068447 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.068615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.068911 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:31Z","lastTransitionTime":"2025-11-24T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.172505 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.172577 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.172590 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.172615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.172632 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:31Z","lastTransitionTime":"2025-11-24T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.275368 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.275410 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.275420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.275441 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.275452 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:31Z","lastTransitionTime":"2025-11-24T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.338602 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs\") pod \"network-metrics-daemon-r955c\" (UID: \"6662f3ec-8806-4797-a7a5-f1606c4a54cf\") " pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:31 crc kubenswrapper[4756]: E1124 12:28:31.338868 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:31 crc kubenswrapper[4756]: E1124 12:28:31.339339 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs podName:6662f3ec-8806-4797-a7a5-f1606c4a54cf nodeName:}" failed. No retries permitted until 2025-11-24 12:28:39.339313492 +0000 UTC m=+51.696827634 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs") pod "network-metrics-daemon-r955c" (UID: "6662f3ec-8806-4797-a7a5-f1606c4a54cf") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.377949 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.378013 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.378032 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.378057 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.378075 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:31Z","lastTransitionTime":"2025-11-24T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.474615 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:31 crc kubenswrapper[4756]: E1124 12:28:31.474819 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.480848 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.480908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.480927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.480952 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.480971 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:31Z","lastTransitionTime":"2025-11-24T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.583688 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.583766 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.583779 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.583800 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.583813 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:31Z","lastTransitionTime":"2025-11-24T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.686700 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.687172 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.687243 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.687348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.687470 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:31Z","lastTransitionTime":"2025-11-24T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.791143 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.791445 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.791532 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.791626 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.791711 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:31Z","lastTransitionTime":"2025-11-24T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.894819 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.895138 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.895259 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.895349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.895442 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:31Z","lastTransitionTime":"2025-11-24T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.998222 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.998548 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.998635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.998732 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:31 crc kubenswrapper[4756]: I1124 12:28:31.998835 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:31Z","lastTransitionTime":"2025-11-24T12:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.101741 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.101815 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.101834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.101868 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.101895 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:32Z","lastTransitionTime":"2025-11-24T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.205010 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.205099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.205114 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.205139 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.205176 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:32Z","lastTransitionTime":"2025-11-24T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.308119 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.308248 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.308314 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.308372 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.308400 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:32Z","lastTransitionTime":"2025-11-24T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.411308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.411373 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.411394 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.411432 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.411457 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:32Z","lastTransitionTime":"2025-11-24T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.475657 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.475792 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:32 crc kubenswrapper[4756]: E1124 12:28:32.475861 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:32 crc kubenswrapper[4756]: E1124 12:28:32.476185 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.476383 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:32 crc kubenswrapper[4756]: E1124 12:28:32.476812 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.516404 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.516469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.516483 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.516513 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.516531 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:32Z","lastTransitionTime":"2025-11-24T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.619571 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.619630 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.619640 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.619662 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.619676 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:32Z","lastTransitionTime":"2025-11-24T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.722877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.723173 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.723245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.723325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.723383 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:32Z","lastTransitionTime":"2025-11-24T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.825883 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.825931 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.825942 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.825963 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.825975 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:32Z","lastTransitionTime":"2025-11-24T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.929584 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.929639 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.929651 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.929670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:32 crc kubenswrapper[4756]: I1124 12:28:32.929686 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:32Z","lastTransitionTime":"2025-11-24T12:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.032522 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.032583 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.032594 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.032611 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.032621 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:33Z","lastTransitionTime":"2025-11-24T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.135724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.135768 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.135778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.135796 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.135806 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:33Z","lastTransitionTime":"2025-11-24T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.238957 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.239010 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.239019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.239037 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.239049 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:33Z","lastTransitionTime":"2025-11-24T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.342723 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.342783 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.342793 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.342815 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.342826 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:33Z","lastTransitionTime":"2025-11-24T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.446237 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.446296 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.446309 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.446333 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.446347 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:33Z","lastTransitionTime":"2025-11-24T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.475354 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:33 crc kubenswrapper[4756]: E1124 12:28:33.475922 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.549943 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.550026 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.550040 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.550060 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.550073 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:33Z","lastTransitionTime":"2025-11-24T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.653665 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.653709 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.653720 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.653747 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.653764 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:33Z","lastTransitionTime":"2025-11-24T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.756614 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.756661 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.756673 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.756694 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.756707 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:33Z","lastTransitionTime":"2025-11-24T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.860147 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.860213 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.860226 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.860265 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.860280 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:33Z","lastTransitionTime":"2025-11-24T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.963611 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.963661 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.963674 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.963696 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:33 crc kubenswrapper[4756]: I1124 12:28:33.963709 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:33Z","lastTransitionTime":"2025-11-24T12:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.067447 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.067501 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.067512 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.067529 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.067539 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.171206 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.171283 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.171319 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.171352 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.171379 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.275650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.275758 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.275789 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.275824 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.275851 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.325116 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.325194 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.325206 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.325225 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.325235 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: E1124 12:28:34.343811 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.349721 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.349792 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.349818 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.349853 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.349881 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.381956 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.382019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.382039 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.382068 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.382090 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.411883 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.411948 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.411962 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.411992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.412006 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: E1124 12:28:34.433342 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.440713 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.440791 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.440815 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.440852 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.440871 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: E1124 12:28:34.458574 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:34Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:34 crc kubenswrapper[4756]: E1124 12:28:34.458962 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.461710 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.461787 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.461803 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.461829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.461846 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.476588 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.476660 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:34 crc kubenswrapper[4756]: E1124 12:28:34.476744 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:34 crc kubenswrapper[4756]: E1124 12:28:34.476903 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.477331 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:34 crc kubenswrapper[4756]: E1124 12:28:34.477446 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.571821 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.571897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.571912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.571932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.571972 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.675490 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.675552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.675569 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.675592 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.675608 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.779821 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.779911 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.779927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.779949 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.779961 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.883758 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.883817 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.883828 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.883848 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.883862 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.987080 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.987190 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.987218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.987250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:34 crc kubenswrapper[4756]: I1124 12:28:34.987272 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:34Z","lastTransitionTime":"2025-11-24T12:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.090049 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.090105 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.090126 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.090187 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.090211 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:35Z","lastTransitionTime":"2025-11-24T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.194479 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.194594 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.194612 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.194641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.194679 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:35Z","lastTransitionTime":"2025-11-24T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.297770 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.297833 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.297843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.297864 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.297879 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:35Z","lastTransitionTime":"2025-11-24T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.401271 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.401324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.401335 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.401355 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.401367 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:35Z","lastTransitionTime":"2025-11-24T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.475320 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:35 crc kubenswrapper[4756]: E1124 12:28:35.475464 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.504283 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.504332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.504341 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.504356 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.504370 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:35Z","lastTransitionTime":"2025-11-24T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.607413 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.607472 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.607482 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.607499 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.607512 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:35Z","lastTransitionTime":"2025-11-24T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.710145 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.710216 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.710228 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.710246 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.710259 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:35Z","lastTransitionTime":"2025-11-24T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.813052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.813130 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.813145 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.813190 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.813206 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:35Z","lastTransitionTime":"2025-11-24T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.916113 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.916193 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.916212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.916240 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:35 crc kubenswrapper[4756]: I1124 12:28:35.916275 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:35Z","lastTransitionTime":"2025-11-24T12:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.019554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.019602 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.019618 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.019638 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.019652 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:36Z","lastTransitionTime":"2025-11-24T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.123066 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.123137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.123216 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.123251 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.123273 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:36Z","lastTransitionTime":"2025-11-24T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.227231 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.227316 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.227337 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.227368 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.227389 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:36Z","lastTransitionTime":"2025-11-24T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.330079 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.330130 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.330142 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.330183 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.330197 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:36Z","lastTransitionTime":"2025-11-24T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.433837 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.433909 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.433933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.433963 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.433983 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:36Z","lastTransitionTime":"2025-11-24T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.475483 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.475638 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.475483 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:36 crc kubenswrapper[4756]: E1124 12:28:36.475729 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:36 crc kubenswrapper[4756]: E1124 12:28:36.475821 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:36 crc kubenswrapper[4756]: E1124 12:28:36.475970 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.537101 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.537195 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.537214 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.537236 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.537254 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:36Z","lastTransitionTime":"2025-11-24T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.546613 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.559492 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.572208 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.590621 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.609307 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.631352 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.642079 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.642144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.642194 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.642229 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.642249 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:36Z","lastTransitionTime":"2025-11-24T12:28:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.671087 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc2
7f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.688288 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.710328 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:20Z\\\",\\\"message\\\":\\\"28:20.739840 6167 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.739507 6167 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 12:28:20.740197 6167 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.742005 6167 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 12:28:20.742043 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 12:28:20.743667 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 12:28:20.743776 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 12:28:20.743789 6167 factory.go:656] Stopping watch factory\\\\nI1124 12:28:20.772234 6167 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 12:28:20.772257 6167 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 12:28:20.772300 6167 ovnkube.go:599] Stopped ovnkube\\\\nI1124 12:28:20.772333 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 12:28:20.772412 6167 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c2
0cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.730668 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r955c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6662f3ec-8806-4797-a7a5-f1606c4a54cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r955c\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.745906 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.745955 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.745972 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.745996 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.746011 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:36Z","lastTransitionTime":"2025-11-24T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.750106 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204c180aed3e42b0712bdf4045ff2c33d8e872767dbd0d48b80a72de0bbaee46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.769812 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.789438 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.807396 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.824801 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.845387 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159
ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.848644 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.848667 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.848676 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.848690 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.848703 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:36Z","lastTransitionTime":"2025-11-24T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.860473 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.874409 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.889207 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:36Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.952809 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.952906 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.952936 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.952971 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 24 12:28:36 crc kubenswrapper[4756]: I1124 12:28:36.952998 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:36Z","lastTransitionTime":"2025-11-24T12:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.056439 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.056549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.056575 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.056607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.056630 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:37Z","lastTransitionTime":"2025-11-24T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.159602 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.159675 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.159695 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.159719 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.159736 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:37Z","lastTransitionTime":"2025-11-24T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.262071 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.262123 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.262133 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.262152 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.262181 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:37Z","lastTransitionTime":"2025-11-24T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.365345 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.365402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.365414 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.365437 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.365452 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:37Z","lastTransitionTime":"2025-11-24T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.469476 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.469545 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.469564 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.469589 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.469607 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:37Z","lastTransitionTime":"2025-11-24T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.474984 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:37 crc kubenswrapper[4756]: E1124 12:28:37.475341 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.572503 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.572601 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.572614 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.572640 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.572658 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:37Z","lastTransitionTime":"2025-11-24T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.675532 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.675581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.675593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.675611 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.675624 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:37Z","lastTransitionTime":"2025-11-24T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.779498 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.779557 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.779571 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.779591 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.779604 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:37Z","lastTransitionTime":"2025-11-24T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.882530 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.882627 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.882649 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.882681 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.882704 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:37Z","lastTransitionTime":"2025-11-24T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.986005 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.986112 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.986143 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.986335 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:37 crc kubenswrapper[4756]: I1124 12:28:37.986365 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:37Z","lastTransitionTime":"2025-11-24T12:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.089654 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.090066 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.090137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.090247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.090326 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:38Z","lastTransitionTime":"2025-11-24T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.193649 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.194213 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.194370 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.194574 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.194772 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:38Z","lastTransitionTime":"2025-11-24T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.298266 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.298387 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.298410 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.298437 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.298457 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:38Z","lastTransitionTime":"2025-11-24T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.401232 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.401301 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.401318 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.401347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.401370 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:38Z","lastTransitionTime":"2025-11-24T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.475017 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.475093 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.475116 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:38 crc kubenswrapper[4756]: E1124 12:28:38.475252 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:38 crc kubenswrapper[4756]: E1124 12:28:38.475337 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:38 crc kubenswrapper[4756]: E1124 12:28:38.475459 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.498294 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.504090 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.504130 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.504141 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.504200 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.504215 
4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:38Z","lastTransitionTime":"2025-11-24T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.515233 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.533217 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.553380 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.573104 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204c180aed3e42b0712bdf4045ff2c33d8e87
2767dbd0d48b80a72de0bbaee46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.602488 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159
ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.609842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.609904 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.609920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.609946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.609961 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:38Z","lastTransitionTime":"2025-11-24T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.622721 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.641573 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.657245 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.674093 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.690857 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.708300 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.713623 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.713678 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.713694 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.713728 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.713751 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:38Z","lastTransitionTime":"2025-11-24T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.723674 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.741006 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49680242-2d7d-4591-97a5-6a13e5fc0cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9205c28e1f32d71dd30b61a2b2df71cb148f53e2b119af807f70acd56fa1e8f1\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fc8af824259a3716e9d2705d5f65224260f5ed3320e44b34d37a2ba2f4dca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd51b4d4aec28f8525fdf78ca2d294427f4a87435400c81615d68c4bf2988ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.762992 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.777474 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.799335 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:20Z\\\",\\\"message\\\":\\\"28:20.739840 6167 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.739507 6167 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 12:28:20.740197 6167 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.742005 6167 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 12:28:20.742043 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 12:28:20.743667 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 12:28:20.743776 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 12:28:20.743789 6167 factory.go:656] Stopping watch factory\\\\nI1124 12:28:20.772234 6167 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 12:28:20.772257 6167 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 12:28:20.772300 6167 ovnkube.go:599] Stopped ovnkube\\\\nI1124 12:28:20.772333 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 12:28:20.772412 6167 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c2
0cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.811660 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r955c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6662f3ec-8806-4797-a7a5-f1606c4a54cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r955c\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:38Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.815906 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.815945 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.815956 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.815975 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.815986 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:38Z","lastTransitionTime":"2025-11-24T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.918953 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.919020 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.919032 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.919053 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:38 crc kubenswrapper[4756]: I1124 12:28:38.919068 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:38Z","lastTransitionTime":"2025-11-24T12:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.022389 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.022451 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.022461 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.022526 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.022550 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:39Z","lastTransitionTime":"2025-11-24T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.125449 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.125501 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.125512 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.125533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.125544 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:39Z","lastTransitionTime":"2025-11-24T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.228277 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.228370 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.228391 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.228426 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.228446 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:39Z","lastTransitionTime":"2025-11-24T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.331460 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.331536 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.331554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.331582 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.331602 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:39Z","lastTransitionTime":"2025-11-24T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.434472 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs\") pod \"network-metrics-daemon-r955c\" (UID: \"6662f3ec-8806-4797-a7a5-f1606c4a54cf\") " pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:39 crc kubenswrapper[4756]: E1124 12:28:39.434610 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:39 crc kubenswrapper[4756]: E1124 12:28:39.434650 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs podName:6662f3ec-8806-4797-a7a5-f1606c4a54cf nodeName:}" failed. No retries permitted until 2025-11-24 12:28:55.434635929 +0000 UTC m=+67.792150071 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs") pod "network-metrics-daemon-r955c" (UID: "6662f3ec-8806-4797-a7a5-f1606c4a54cf") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.434776 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.434839 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.434855 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.434884 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.434902 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:39Z","lastTransitionTime":"2025-11-24T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.475293 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:39 crc kubenswrapper[4756]: E1124 12:28:39.475442 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.537784 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.537837 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.537850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.537875 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.537889 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:39Z","lastTransitionTime":"2025-11-24T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.640320 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.640389 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.640409 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.640435 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.640464 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:39Z","lastTransitionTime":"2025-11-24T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.743321 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.743377 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.743393 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.743418 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.743435 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:39Z","lastTransitionTime":"2025-11-24T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.846635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.846706 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.846723 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.846756 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.846775 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:39Z","lastTransitionTime":"2025-11-24T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.950273 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.950335 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.950353 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.950384 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:39 crc kubenswrapper[4756]: I1124 12:28:39.950402 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:39Z","lastTransitionTime":"2025-11-24T12:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.052848 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.052885 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.052894 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.052909 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.052918 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:40Z","lastTransitionTime":"2025-11-24T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.155338 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.155387 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.155397 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.155410 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.155419 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:40Z","lastTransitionTime":"2025-11-24T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.244095 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.244269 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.244308 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.244329 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.244404 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-11-24 12:29:12.244367678 +0000 UTC m=+84.601881860 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.244437 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.244446 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.244473 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.244505 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.244560 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:28:40 crc 
kubenswrapper[4756]: E1124 12:28:40.244580 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.244617 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.244536 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:29:12.244513992 +0000 UTC m=+84.602028164 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.244686 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:29:12.244666056 +0000 UTC m=+84.602180238 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.244456 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.244715 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.244718 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 12:29:12.244701667 +0000 UTC m=+84.602215859 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.244769 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 12:29:12.244744278 +0000 UTC m=+84.602258450 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.258878 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.258944 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.258961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.258989 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.259008 4756 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:40Z","lastTransitionTime":"2025-11-24T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.362574 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.362646 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.362670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.362748 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.362765 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:40Z","lastTransitionTime":"2025-11-24T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.465418 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.465459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.465469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.465486 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.465496 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:40Z","lastTransitionTime":"2025-11-24T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.475041 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.475171 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.475046 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.475253 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.475309 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:40 crc kubenswrapper[4756]: E1124 12:28:40.475380 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.568934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.568988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.569001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.569023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.569035 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:40Z","lastTransitionTime":"2025-11-24T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.673272 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.673350 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.673367 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.673395 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.673420 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:40Z","lastTransitionTime":"2025-11-24T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.777297 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.777355 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.777368 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.777389 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.777403 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:40Z","lastTransitionTime":"2025-11-24T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.880724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.880798 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.880821 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.880845 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.880861 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:40Z","lastTransitionTime":"2025-11-24T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.985118 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.985236 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.985260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.985299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:40 crc kubenswrapper[4756]: I1124 12:28:40.985321 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:40Z","lastTransitionTime":"2025-11-24T12:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.089262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.089330 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.089343 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.089367 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.089382 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:41Z","lastTransitionTime":"2025-11-24T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.192299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.192362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.192377 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.192401 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.192415 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:41Z","lastTransitionTime":"2025-11-24T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.295897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.295947 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.295961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.295983 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.295996 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:41Z","lastTransitionTime":"2025-11-24T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.399427 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.399477 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.399490 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.399511 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.399524 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:41Z","lastTransitionTime":"2025-11-24T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.474555 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:41 crc kubenswrapper[4756]: E1124 12:28:41.475772 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.475983 4756 scope.go:117] "RemoveContainer" containerID="d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.502974 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.503021 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.503033 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.503100 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.503116 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:41Z","lastTransitionTime":"2025-11-24T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.605839 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.606406 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.606423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.606445 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.606460 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:41Z","lastTransitionTime":"2025-11-24T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.709584 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.709640 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.709658 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.709679 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.709693 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:41Z","lastTransitionTime":"2025-11-24T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.812408 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.812434 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.812442 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.812455 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.812465 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:41Z","lastTransitionTime":"2025-11-24T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.915277 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.915389 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.915402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.915424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.915437 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:41Z","lastTransitionTime":"2025-11-24T12:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.920925 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovnkube-controller/1.log" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.923839 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerStarted","Data":"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474"} Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.924296 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.941839 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 
12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:41Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.958346 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:41Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:41 crc kubenswrapper[4756]: I1124 12:28:41.977620 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:28:41Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.004888 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.018223 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.018268 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.018291 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.018307 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.018320 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:42Z","lastTransitionTime":"2025-11-24T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.033248 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.049856 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.066974 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.081886 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.095627 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49680242-2d7d-4591-97a5-6a13e5fc0cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9205c28e1f32d71dd30b61a2b2df71cb148f53e2b119af807f70acd56fa1e8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fc8af824259a3716e9d2705d5f65224260f5ed3320e44b34d37a2ba2f4dca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd51b4d4aec28f8525fdf78ca2d294427f4a87435400c81615d68c4bf2988ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.117679 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.120352 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.120397 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:42 crc kubenswrapper[4756]: 
I1124 12:28:42.120410 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.120444 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.120456 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:42Z","lastTransitionTime":"2025-11-24T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.133478 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf34
65c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.154006 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:20Z\\\",\\\"message\\\":\\\"28:20.739840 6167 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.739507 6167 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 12:28:20.740197 6167 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.742005 6167 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 12:28:20.742043 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 12:28:20.743667 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 12:28:20.743776 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 12:28:20.743789 6167 factory.go:656] Stopping watch factory\\\\nI1124 12:28:20.772234 6167 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 12:28:20.772257 6167 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 12:28:20.772300 6167 ovnkube.go:599] Stopped ovnkube\\\\nI1124 12:28:20.772333 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 12:28:20.772412 6167 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.170475 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r955c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6662f3ec-8806-4797-a7a5-f1606c4a54cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r955c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc 
kubenswrapper[4756]: I1124 12:28:42.183952 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204c180aed3e42b0712bdf4045ff2c33d8e872767dbd0d48b80a72de0bbaee46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.198253 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.212109 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.223778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.223854 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.223917 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 
12:28:42.223950 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.224010 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:42Z","lastTransitionTime":"2025-11-24T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.228810 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.243051 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.327033 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.327081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.327094 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:42 crc 
kubenswrapper[4756]: I1124 12:28:42.327113 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.327126 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:42Z","lastTransitionTime":"2025-11-24T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.429816 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.429878 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.429895 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.429921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.429942 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:42Z","lastTransitionTime":"2025-11-24T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.475303 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.475405 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.475455 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:42 crc kubenswrapper[4756]: E1124 12:28:42.475498 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:42 crc kubenswrapper[4756]: E1124 12:28:42.475609 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:42 crc kubenswrapper[4756]: E1124 12:28:42.475832 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.533744 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.533811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.533827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.533852 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.533872 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:42Z","lastTransitionTime":"2025-11-24T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.636708 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.636742 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.636754 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.636770 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.636782 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:42Z","lastTransitionTime":"2025-11-24T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.740338 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.740405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.740415 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.740436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.740452 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:42Z","lastTransitionTime":"2025-11-24T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.842663 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.842694 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.842702 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.842714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.842722 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:42Z","lastTransitionTime":"2025-11-24T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.929942 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovnkube-controller/2.log" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.930714 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovnkube-controller/1.log" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.933963 4756 generic.go:334] "Generic (PLEG): container finished" podID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerID="2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474" exitCode=1 Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.934026 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerDied","Data":"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474"} Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.934093 4756 scope.go:117] "RemoveContainer" containerID="d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.935228 4756 scope.go:117] "RemoveContainer" containerID="2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474" Nov 24 12:28:42 crc kubenswrapper[4756]: E1124 12:28:42.935474 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.945772 4756 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.945826 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.945840 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.945860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.945874 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:42Z","lastTransitionTime":"2025-11-24T12:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.951597 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.974120 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:42 crc kubenswrapper[4756]: I1124 12:28:42.990134 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.011618 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.034819 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.050910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.050998 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.051027 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.051060 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.051083 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:43Z","lastTransitionTime":"2025-11-24T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.061563 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.087650 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.101473 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.119629 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49680242-2d7d-4591-97a5-6a13e5fc0cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9205c28e1f32d71dd30b61a2b2df71cb148f53e2b119af807f70acd56fa1e8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fc8af824259a3716e9d2705d5f65224260f5ed3320e44b34d37a2ba2f4dca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd51b4d4aec28f8525fdf78ca2d294427f4a87435400c81615d68c4bf2988ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.150807 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.153313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.153353 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:43 crc kubenswrapper[4756]: 
I1124 12:28:43.153364 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.153383 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.153394 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:43Z","lastTransitionTime":"2025-11-24T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.166358 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf34
65c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.188339 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9935330d446896479e1c9ac977914f4ffd2882076fdab204300b6fe8585f498\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:20Z\\\",\\\"message\\\":\\\"28:20.739840 6167 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.739507 6167 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 12:28:20.740197 6167 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 12:28:20.742005 6167 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 12:28:20.742043 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 12:28:20.743667 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 12:28:20.743776 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 12:28:20.743789 6167 factory.go:656] Stopping watch factory\\\\nI1124 12:28:20.772234 6167 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 12:28:20.772257 6167 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 12:28:20.772300 6167 ovnkube.go:599] Stopped ovnkube\\\\nI1124 12:28:20.772333 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 12:28:20.772412 6167 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:42Z\\\",\\\"message\\\":\\\"r-node configs for network=default: []services.lbConfig(nil)\\\\nF1124 12:28:42.295074 6424 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add 
Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z]\\\\nI1124 12:28:42.296255 6424 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1124 12:28:42.296274 6424 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", 
UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.202050 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r955c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6662f3ec-8806-4797-a7a5-f1606c4a54cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r955c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.219108 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.234614 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.252498 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.257264 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.257328 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.257338 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.257360 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.257376 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:43Z","lastTransitionTime":"2025-11-24T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.270086 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9a
f6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.287025 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d
06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204c180aed3e42b0712bdf4045ff2c33d8e872767dbd0d48b80a72de0bbaee46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.360518 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.360553 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.360562 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.360577 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.360586 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:43Z","lastTransitionTime":"2025-11-24T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.463441 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.463485 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.463495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.463510 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.463522 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:43Z","lastTransitionTime":"2025-11-24T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.474887 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:43 crc kubenswrapper[4756]: E1124 12:28:43.475035 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.566380 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.566480 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.566496 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.566526 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.566545 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:43Z","lastTransitionTime":"2025-11-24T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.669624 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.669659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.669669 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.669683 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.669693 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:43Z","lastTransitionTime":"2025-11-24T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.777388 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.777477 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.777493 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.777518 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.777534 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:43Z","lastTransitionTime":"2025-11-24T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.881465 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.881853 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.882000 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.882190 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.882401 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:43Z","lastTransitionTime":"2025-11-24T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.940443 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovnkube-controller/2.log" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.946016 4756 scope.go:117] "RemoveContainer" containerID="2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474" Nov 24 12:28:43 crc kubenswrapper[4756]: E1124 12:28:43.946344 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.969088 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.987910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.988012 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.988025 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.988043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.988057 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:43Z","lastTransitionTime":"2025-11-24T12:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:43 crc kubenswrapper[4756]: I1124 12:28:43.988096 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:43Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.004809 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.018750 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.032743 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204c180aed3e42b0712bdf4045ff2c33d8e87
2767dbd0d48b80a72de0bbaee46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.049553 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159
ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.065352 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.082484 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.090814 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.090851 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.090860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.090875 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.090883 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.095331 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.106517 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.118987 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.133145 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.142828 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.157341 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49680242-2d7d-4591-97a5-6a13e5fc0cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9205c28e1f32d71dd30b61a2b2df71cb148f53e2b119af807f70acd56fa1e8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fc8af824259a3716e9d2705d5f65224260f5ed3320e44b34d37a2ba2f4dca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd51b4d4aec28f8525fdf78ca2d294427f4a87435400c81615d68c4bf2988ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.176492 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.193711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.193755 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: 
I1124 12:28:44.193715 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus
/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 
12:28:44.193768 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.193961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.193993 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.214271 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:42Z\\\",\\\"message\\\":\\\"r-node configs for network=default: []services.lbConfig(nil)\\\\nF1124 12:28:42.295074 
6424 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z]\\\\nI1124 12:28:42.296255 6424 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1124 12:28:42.296274 6424 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c2
0cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.226868 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r955c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6662f3ec-8806-4797-a7a5-f1606c4a54cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r955c\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.296940 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.296990 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.297001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.297021 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.297033 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.401870 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.401903 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.401912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.401926 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.401935 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.475715 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:44 crc kubenswrapper[4756]: E1124 12:28:44.475849 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.476041 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:44 crc kubenswrapper[4756]: E1124 12:28:44.476095 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.476269 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:44 crc kubenswrapper[4756]: E1124 12:28:44.476324 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.504946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.505034 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.505051 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.505072 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.505089 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.512562 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.512622 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.512637 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.512657 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.512672 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:44 crc kubenswrapper[4756]: E1124 12:28:44.535650 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.542386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.542493 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.542516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.542549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.542570 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:44 crc kubenswrapper[4756]: E1124 12:28:44.563179 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.568148 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.568242 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.568257 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.568281 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.568296 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.590755 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.590810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.590826 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.590847 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.590860 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:44 crc kubenswrapper[4756]: E1124 12:28:44.604292 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.609809 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.609893 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.609909 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.609924 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.609937 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:44 crc kubenswrapper[4756]: E1124 12:28:44.624441 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:44Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:44 crc kubenswrapper[4756]: E1124 12:28:44.624588 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.626104 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.626175 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.626199 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.626221 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.626237 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.730289 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.730360 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.730378 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.730449 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.730464 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.834121 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.834261 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.834289 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.834321 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.834345 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.937887 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.937933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.937947 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.937966 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:44 crc kubenswrapper[4756]: I1124 12:28:44.937982 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:44Z","lastTransitionTime":"2025-11-24T12:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.041797 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.041844 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.041861 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.041884 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.041900 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:45Z","lastTransitionTime":"2025-11-24T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.144433 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.144500 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.144524 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.144549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.144569 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:45Z","lastTransitionTime":"2025-11-24T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.247406 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.247543 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.247621 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.247656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.247731 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:45Z","lastTransitionTime":"2025-11-24T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.350986 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.351053 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.351074 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.351102 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.351123 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:45Z","lastTransitionTime":"2025-11-24T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.454094 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.454489 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.454554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.454965 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.455030 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:45Z","lastTransitionTime":"2025-11-24T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.474967 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:45 crc kubenswrapper[4756]: E1124 12:28:45.475072 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.558945 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.559003 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.559021 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.559046 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.559063 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:45Z","lastTransitionTime":"2025-11-24T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.662019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.662079 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.662094 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.662117 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.662131 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:45Z","lastTransitionTime":"2025-11-24T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.764739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.764801 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.764811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.764830 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.764841 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:45Z","lastTransitionTime":"2025-11-24T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.868092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.868178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.868189 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.868213 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.868226 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:45Z","lastTransitionTime":"2025-11-24T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.971209 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.971297 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.971322 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.971355 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:45 crc kubenswrapper[4756]: I1124 12:28:45.971374 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:45Z","lastTransitionTime":"2025-11-24T12:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.073667 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.073749 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.073780 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.073812 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.073843 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:46Z","lastTransitionTime":"2025-11-24T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.179785 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.179854 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.179876 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.179904 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.179926 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:46Z","lastTransitionTime":"2025-11-24T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.283247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.283575 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.283716 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.283849 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.284020 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:46Z","lastTransitionTime":"2025-11-24T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.387490 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.387553 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.387575 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.387604 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.387626 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:46Z","lastTransitionTime":"2025-11-24T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.474855 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:46 crc kubenswrapper[4756]: E1124 12:28:46.475008 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.475005 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:46 crc kubenswrapper[4756]: E1124 12:28:46.475262 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.475367 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:46 crc kubenswrapper[4756]: E1124 12:28:46.475444 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.489955 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.489984 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.489992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.490004 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.490013 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:46Z","lastTransitionTime":"2025-11-24T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.592219 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.592259 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.592274 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.592293 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.592305 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:46Z","lastTransitionTime":"2025-11-24T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.694646 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.694809 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.694828 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.694852 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.694871 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:46Z","lastTransitionTime":"2025-11-24T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.798186 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.798236 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.798248 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.798265 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.798277 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:46Z","lastTransitionTime":"2025-11-24T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.901270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.901346 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.901368 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.901395 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:46 crc kubenswrapper[4756]: I1124 12:28:46.901410 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:46Z","lastTransitionTime":"2025-11-24T12:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.003866 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.003939 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.003964 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.003992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.004014 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:47Z","lastTransitionTime":"2025-11-24T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.107926 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.107991 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.108012 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.108039 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.108059 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:47Z","lastTransitionTime":"2025-11-24T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.210875 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.210927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.210942 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.210961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.210974 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:47Z","lastTransitionTime":"2025-11-24T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.314739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.314820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.314843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.314873 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.314896 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:47Z","lastTransitionTime":"2025-11-24T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.418357 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.418417 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.418434 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.418459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.418477 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:47Z","lastTransitionTime":"2025-11-24T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.475222 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:47 crc kubenswrapper[4756]: E1124 12:28:47.475461 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.521665 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.521701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.521713 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.521731 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.521744 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:47Z","lastTransitionTime":"2025-11-24T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.625014 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.625059 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.625092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.625109 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.625118 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:47Z","lastTransitionTime":"2025-11-24T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.728443 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.728538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.728556 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.728579 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.728594 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:47Z","lastTransitionTime":"2025-11-24T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.831998 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.832079 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.832099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.832128 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.832147 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:47Z","lastTransitionTime":"2025-11-24T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.935202 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.935255 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.935269 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.935291 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:47 crc kubenswrapper[4756]: I1124 12:28:47.935309 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:47Z","lastTransitionTime":"2025-11-24T12:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.038212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.038285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.038298 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.038325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.038340 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:48Z","lastTransitionTime":"2025-11-24T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.140607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.140830 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.140893 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.140989 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.141052 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:48Z","lastTransitionTime":"2025-11-24T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.243754 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.244144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.244417 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.244623 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.244772 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:48Z","lastTransitionTime":"2025-11-24T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.348550 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.348609 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.348630 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.348654 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.348671 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:48Z","lastTransitionTime":"2025-11-24T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.451659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.451764 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.451794 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.451841 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.451866 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:48Z","lastTransitionTime":"2025-11-24T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.474669 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.474718 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.474691 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:48 crc kubenswrapper[4756]: E1124 12:28:48.474905 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:48 crc kubenswrapper[4756]: E1124 12:28:48.475116 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:48 crc kubenswrapper[4756]: E1124 12:28:48.475417 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.492782 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49680242-2d7d-4591-97a5-6a13e5fc0cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9205c28e1f32d71dd30b61a2b2df71cb148f53e2b119af807f70acd56fa1e8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\"
:\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fc8af824259a3716e9d2705d5f65224260f5ed3320e44b34d37a2ba2f4dca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd51b4d4aec28f8525fdf78ca2d294427f4a87435400c81615d68c4bf2988ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2
597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.525440 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.542857 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.554726 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.554841 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.554902 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.554968 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.555048 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:48Z","lastTransitionTime":"2025-11-24T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.566841 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:42Z\\\",\\\"message\\\":\\\"r-node configs for network=default: []services.lbConfig(nil)\\\\nF1124 12:28:42.295074 6424 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z]\\\\nI1124 12:28:42.296255 6424 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1124 12:28:42.296274 6424 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c2
0cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.591691 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r955c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6662f3ec-8806-4797-a7a5-f1606c4a54cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r955c\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.612713 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204c180aed3e42b0712bdf4045ff2c33d8e872767dbd0d48b80a72de0bbaee46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.629574 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-con
troller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.643890 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.657018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.657058 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:48 crc 
kubenswrapper[4756]: I1124 12:28:48.657072 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.657094 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.657108 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:48Z","lastTransitionTime":"2025-11-24T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.661254 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.673678 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.689189 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159
ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.710340 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.724947 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.735486 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.747272 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.757573 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.759053 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.759094 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.759106 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.759125 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.759138 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:48Z","lastTransitionTime":"2025-11-24T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.771261 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.781781 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:48Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.862292 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.862362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.862386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.862419 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.862472 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:48Z","lastTransitionTime":"2025-11-24T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.965607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.965675 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.965692 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.965718 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:48 crc kubenswrapper[4756]: I1124 12:28:48.965735 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:48Z","lastTransitionTime":"2025-11-24T12:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.068707 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.068787 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.068811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.068840 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.068930 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:49Z","lastTransitionTime":"2025-11-24T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.171501 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.171825 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.171845 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.171869 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.171886 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:49Z","lastTransitionTime":"2025-11-24T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.275847 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.275929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.275942 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.275962 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.275974 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:49Z","lastTransitionTime":"2025-11-24T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.378753 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.378810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.378824 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.378842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.378854 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:49Z","lastTransitionTime":"2025-11-24T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.475259 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:49 crc kubenswrapper[4756]: E1124 12:28:49.475685 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.482541 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.482603 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.482622 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.482652 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.482670 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:49Z","lastTransitionTime":"2025-11-24T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.584875 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.584933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.584952 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.584977 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.584997 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:49Z","lastTransitionTime":"2025-11-24T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.687694 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.687727 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.687736 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.687751 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.687760 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:49Z","lastTransitionTime":"2025-11-24T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.790831 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.791107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.791191 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.791295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.791364 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:49Z","lastTransitionTime":"2025-11-24T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.894822 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.895180 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.895309 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.895413 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.895617 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:49Z","lastTransitionTime":"2025-11-24T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.998646 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.998698 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.998712 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.998733 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:49 crc kubenswrapper[4756]: I1124 12:28:49.998748 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:49Z","lastTransitionTime":"2025-11-24T12:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.101296 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.101349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.101362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.101381 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.101392 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:50Z","lastTransitionTime":"2025-11-24T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.203551 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.203591 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.203602 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.203620 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.203631 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:50Z","lastTransitionTime":"2025-11-24T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.306299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.306351 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.306362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.306378 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.306389 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:50Z","lastTransitionTime":"2025-11-24T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.409046 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.409121 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.409144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.409212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.409241 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:50Z","lastTransitionTime":"2025-11-24T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.475311 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:50 crc kubenswrapper[4756]: E1124 12:28:50.475522 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.475556 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.475344 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:50 crc kubenswrapper[4756]: E1124 12:28:50.475927 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:50 crc kubenswrapper[4756]: E1124 12:28:50.475771 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.511488 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.511658 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.511672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.511689 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.511731 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:50Z","lastTransitionTime":"2025-11-24T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.615006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.615101 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.615123 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.615228 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.615258 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:50Z","lastTransitionTime":"2025-11-24T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.718605 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.718676 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.718695 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.718719 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.718736 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:50Z","lastTransitionTime":"2025-11-24T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.821412 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.821449 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.821458 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.821474 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.821483 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:50Z","lastTransitionTime":"2025-11-24T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.923988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.924326 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.924451 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.924573 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:50 crc kubenswrapper[4756]: I1124 12:28:50.924670 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:50Z","lastTransitionTime":"2025-11-24T12:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.027774 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.028538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.028672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.028825 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.028954 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:51Z","lastTransitionTime":"2025-11-24T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.131040 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.131072 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.131081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.131096 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.131106 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:51Z","lastTransitionTime":"2025-11-24T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.234145 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.234295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.234324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.234356 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.234382 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:51Z","lastTransitionTime":"2025-11-24T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.344415 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.344493 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.344529 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.344565 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.344595 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:51Z","lastTransitionTime":"2025-11-24T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.466348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.466388 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.466399 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.466413 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.466423 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:51Z","lastTransitionTime":"2025-11-24T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.479191 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:51 crc kubenswrapper[4756]: E1124 12:28:51.480028 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.568791 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.569055 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.569131 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.569261 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.569345 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:51Z","lastTransitionTime":"2025-11-24T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.674051 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.674361 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.674446 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.674532 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.674616 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:51Z","lastTransitionTime":"2025-11-24T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.777249 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.777304 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.777316 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.777331 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.777342 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:51Z","lastTransitionTime":"2025-11-24T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.880306 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.880651 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.880920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.881130 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.881305 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:51Z","lastTransitionTime":"2025-11-24T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.984714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.984761 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.984776 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.984797 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:51 crc kubenswrapper[4756]: I1124 12:28:51.984812 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:51Z","lastTransitionTime":"2025-11-24T12:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.087302 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.087365 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.087377 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.087393 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.087402 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:52Z","lastTransitionTime":"2025-11-24T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.189972 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.190006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.190018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.190034 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.190046 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:52Z","lastTransitionTime":"2025-11-24T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.292310 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.292353 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.292365 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.292382 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.292393 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:52Z","lastTransitionTime":"2025-11-24T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.394609 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.394858 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.394972 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.395071 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.395174 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:52Z","lastTransitionTime":"2025-11-24T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.474535 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.474535 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.474594 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:52 crc kubenswrapper[4756]: E1124 12:28:52.474748 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:52 crc kubenswrapper[4756]: E1124 12:28:52.474970 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:52 crc kubenswrapper[4756]: E1124 12:28:52.475044 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.496867 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.496946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.496958 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.496996 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.497006 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:52Z","lastTransitionTime":"2025-11-24T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.599148 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.599250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.599270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.599298 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.599315 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:52Z","lastTransitionTime":"2025-11-24T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.701144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.701195 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.701205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.701220 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.701233 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:52Z","lastTransitionTime":"2025-11-24T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.803806 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.804059 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.804174 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.804250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.804329 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:52Z","lastTransitionTime":"2025-11-24T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.906402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.906704 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.906797 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.906870 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:52 crc kubenswrapper[4756]: I1124 12:28:52.906948 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:52Z","lastTransitionTime":"2025-11-24T12:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.010046 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.010388 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.010473 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.010537 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.010611 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:53Z","lastTransitionTime":"2025-11-24T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.113298 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.113327 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.113335 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.113349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.113359 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:53Z","lastTransitionTime":"2025-11-24T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.216249 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.216310 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.216325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.216345 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.216358 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:53Z","lastTransitionTime":"2025-11-24T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.320022 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.320083 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.320107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.320137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.320189 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:53Z","lastTransitionTime":"2025-11-24T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.423994 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.424036 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.424045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.424064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.424073 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:53Z","lastTransitionTime":"2025-11-24T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.474817 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:53 crc kubenswrapper[4756]: E1124 12:28:53.475071 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.526933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.526983 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.526995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.527011 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.527024 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:53Z","lastTransitionTime":"2025-11-24T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.629734 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.629795 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.629808 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.629829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.629841 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:53Z","lastTransitionTime":"2025-11-24T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.733052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.733109 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.733123 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.733150 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.733202 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:53Z","lastTransitionTime":"2025-11-24T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.835716 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.835745 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.835753 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.835787 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.835797 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:53Z","lastTransitionTime":"2025-11-24T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.938718 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.938849 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.938875 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.938903 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:53 crc kubenswrapper[4756]: I1124 12:28:53.938963 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:53Z","lastTransitionTime":"2025-11-24T12:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.041705 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.041763 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.041837 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.041855 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.041866 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:54Z","lastTransitionTime":"2025-11-24T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.144418 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.144481 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.144505 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.144534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.144552 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:54Z","lastTransitionTime":"2025-11-24T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.247305 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.247363 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.247374 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.247436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.247447 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:54Z","lastTransitionTime":"2025-11-24T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.350114 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.350178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.350187 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.350200 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.350211 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:54Z","lastTransitionTime":"2025-11-24T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.452691 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.452929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.453035 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.453122 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.453223 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:54Z","lastTransitionTime":"2025-11-24T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.475810 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:54 crc kubenswrapper[4756]: E1124 12:28:54.476070 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.476413 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.476489 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:54 crc kubenswrapper[4756]: E1124 12:28:54.476626 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:54 crc kubenswrapper[4756]: E1124 12:28:54.476756 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.555605 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.555696 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.555710 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.555729 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.555739 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:54Z","lastTransitionTime":"2025-11-24T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.658665 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.658713 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.658723 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.658741 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.658753 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:54Z","lastTransitionTime":"2025-11-24T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.761567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.761607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.761624 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.761648 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.761664 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:54Z","lastTransitionTime":"2025-11-24T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.865282 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.865322 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.865332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.865347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.865356 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:54Z","lastTransitionTime":"2025-11-24T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.967680 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.967719 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.967729 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.967742 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.967750 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:54Z","lastTransitionTime":"2025-11-24T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.988572 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.988642 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.988657 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.988680 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:54 crc kubenswrapper[4756]: I1124 12:28:54.988693 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:54Z","lastTransitionTime":"2025-11-24T12:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:55 crc kubenswrapper[4756]: E1124 12:28:55.007114 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:55Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.011473 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.011500 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.011509 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.011522 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.011532 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:55Z","lastTransitionTime":"2025-11-24T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:55 crc kubenswrapper[4756]: E1124 12:28:55.030739 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:55Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.035574 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.035615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.035625 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.035640 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.035651 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:55Z","lastTransitionTime":"2025-11-24T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:55 crc kubenswrapper[4756]: E1124 12:28:55.052923 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:55Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.056676 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.056920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.056929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.056942 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.056952 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:55Z","lastTransitionTime":"2025-11-24T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:55 crc kubenswrapper[4756]: E1124 12:28:55.070048 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:55Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.074219 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.074253 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.074267 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.074285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.074297 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:55Z","lastTransitionTime":"2025-11-24T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:55 crc kubenswrapper[4756]: E1124 12:28:55.090456 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:55Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:55 crc kubenswrapper[4756]: E1124 12:28:55.090704 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.092291 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.092332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.092348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.092369 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.092387 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:55Z","lastTransitionTime":"2025-11-24T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.194460 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.194488 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.194495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.194507 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.194514 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:55Z","lastTransitionTime":"2025-11-24T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.296396 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.296507 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.296530 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.296552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.296603 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:55Z","lastTransitionTime":"2025-11-24T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.420931 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.421296 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.421386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.421469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.421547 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:55Z","lastTransitionTime":"2025-11-24T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.474797 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:55 crc kubenswrapper[4756]: E1124 12:28:55.475017 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.524047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.524125 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.524134 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.524169 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.524181 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:55Z","lastTransitionTime":"2025-11-24T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.531745 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs\") pod \"network-metrics-daemon-r955c\" (UID: \"6662f3ec-8806-4797-a7a5-f1606c4a54cf\") " pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:55 crc kubenswrapper[4756]: E1124 12:28:55.532052 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:55 crc kubenswrapper[4756]: E1124 12:28:55.532248 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs podName:6662f3ec-8806-4797-a7a5-f1606c4a54cf nodeName:}" failed. No retries permitted until 2025-11-24 12:29:27.532209307 +0000 UTC m=+99.889723619 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs") pod "network-metrics-daemon-r955c" (UID: "6662f3ec-8806-4797-a7a5-f1606c4a54cf") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.627794 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.627902 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.627923 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.627951 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.627971 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:55Z","lastTransitionTime":"2025-11-24T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.731426 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.731484 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.731498 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.731541 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.731553 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:55Z","lastTransitionTime":"2025-11-24T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.834281 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.834329 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.834368 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.834386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.834396 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:55Z","lastTransitionTime":"2025-11-24T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.938322 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.938786 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.938883 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.939032 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:55 crc kubenswrapper[4756]: I1124 12:28:55.939114 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:55Z","lastTransitionTime":"2025-11-24T12:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.043424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.044016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.044306 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.044586 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.044776 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:56Z","lastTransitionTime":"2025-11-24T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.147917 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.147959 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.147967 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.147983 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.147995 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:56Z","lastTransitionTime":"2025-11-24T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.250081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.250730 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.250813 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.250909 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.250986 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:56Z","lastTransitionTime":"2025-11-24T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.354111 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.354173 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.354184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.354203 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.354213 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:56Z","lastTransitionTime":"2025-11-24T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.456607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.456656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.456668 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.456685 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.456741 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:56Z","lastTransitionTime":"2025-11-24T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.475104 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.475129 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:56 crc kubenswrapper[4756]: E1124 12:28:56.475445 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:56 crc kubenswrapper[4756]: E1124 12:28:56.475456 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.475260 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:56 crc kubenswrapper[4756]: E1124 12:28:56.475757 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.558805 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.558838 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.558848 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.558862 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.558872 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:56Z","lastTransitionTime":"2025-11-24T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.660843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.660888 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.660899 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.660913 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.660924 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:56Z","lastTransitionTime":"2025-11-24T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.762643 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.762691 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.762702 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.762719 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.762730 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:56Z","lastTransitionTime":"2025-11-24T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.865182 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.865236 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.865247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.865262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.865272 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:56Z","lastTransitionTime":"2025-11-24T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.967477 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.967516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.967525 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.967540 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.967550 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:56Z","lastTransitionTime":"2025-11-24T12:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.985998 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-66bwb_077d4abb-b72e-499f-98c2-628720d701dc/kube-multus/0.log" Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.986049 4756 generic.go:334] "Generic (PLEG): container finished" podID="077d4abb-b72e-499f-98c2-628720d701dc" containerID="e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf" exitCode=1 Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.986084 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-66bwb" event={"ID":"077d4abb-b72e-499f-98c2-628720d701dc","Type":"ContainerDied","Data":"e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf"} Nov 24 12:28:56 crc kubenswrapper[4756]: I1124 12:28:56.986507 4756 scope.go:117] "RemoveContainer" containerID="e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.000001 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204c180aed3e42b0712bdf4045ff2c33d8e87
2767dbd0d48b80a72de0bbaee46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:56Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.013868 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.028243 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.042309 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.055566 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.067867 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159
ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.069069 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.069090 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.069097 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.069110 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.069118 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:57Z","lastTransitionTime":"2025-11-24T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.080448 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.090460 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.102028 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.117241 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.130739 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.147191 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.168640 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.173042 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.173275 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.173423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.173536 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.173630 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:57Z","lastTransitionTime":"2025-11-24T12:28:57Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.188525 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49680242-2d7d-4591-97a5-6a13e5fc0cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9205c28e1f32d71dd30b61a2b2df71cb148f53e2b119af807f70acd56fa1e8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fc8af824259a3716e9d2705d5f65224260f5ed3320e44b34d37a2ba2f4dca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd51b4d4aec28f8525fdf78ca2d294427f4a87435400c81615d68c4bf2988ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.211126 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.226371 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:56Z\\\",\\\"message\\\":\\\"2025-11-24T12:28:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3b33be24-f145-4303-8cbd-b8929dd0721c\\\\n2025-11-24T12:28:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3b33be24-f145-4303-8cbd-b8929dd0721c to /host/opt/cni/bin/\\\\n2025-11-24T12:28:11Z [verbose] multus-daemon started\\\\n2025-11-24T12:28:11Z [verbose] Readiness Indicator file check\\\\n2025-11-24T12:28:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.244086 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:42Z\\\",\\\"message\\\":\\\"r-node configs for network=default: []services.lbConfig(nil)\\\\nF1124 12:28:42.295074 6424 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z]\\\\nI1124 12:28:42.296255 6424 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1124 12:28:42.296274 6424 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c2
0cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.253374 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r955c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6662f3ec-8806-4797-a7a5-f1606c4a54cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r955c\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:57Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.276515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.276561 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.276591 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.276611 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.276623 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:57Z","lastTransitionTime":"2025-11-24T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.378792 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.378852 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.378867 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.378884 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.378895 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:57Z","lastTransitionTime":"2025-11-24T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.474912 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:57 crc kubenswrapper[4756]: E1124 12:28:57.475060 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.481725 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.481758 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.481769 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.481783 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.481794 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:57Z","lastTransitionTime":"2025-11-24T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.583908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.583966 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.583996 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.584016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.584027 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:57Z","lastTransitionTime":"2025-11-24T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.686951 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.686990 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.686999 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.687013 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.687023 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:57Z","lastTransitionTime":"2025-11-24T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.789115 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.789151 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.789184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.789200 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.789211 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:57Z","lastTransitionTime":"2025-11-24T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.891059 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.891122 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.891147 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.891271 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.891296 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:57Z","lastTransitionTime":"2025-11-24T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.990715 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-66bwb_077d4abb-b72e-499f-98c2-628720d701dc/kube-multus/0.log" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.990776 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-66bwb" event={"ID":"077d4abb-b72e-499f-98c2-628720d701dc","Type":"ContainerStarted","Data":"a11a2d7708b797f4b4938bfdb18ee927433d3844be3300a7087fda27661b4d17"} Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.993436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.993468 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.993478 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.993490 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:57 crc kubenswrapper[4756]: I1124 12:28:57.993500 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:57Z","lastTransitionTime":"2025-11-24T12:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.006894 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.021377 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.033305 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.043145 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.054126 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.065152 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.080128 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.089369 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.095132 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.095184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.095197 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.095215 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.095226 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:58Z","lastTransitionTime":"2025-11-24T12:28:58Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.101402 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49680242-2d7d-4591-97a5-6a13e5fc0cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9205c28e1f32d71dd30b61a2b2df71cb148f53e2b119af807f70acd56fa1e8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fc8af824259a3716e9d2705d5f65224260f5ed3320e44b34d37a2ba2f4dca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd51b4d4aec28f8525fdf78ca2d294427f4a87435400c81615d68c4bf2988ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.122896 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.135872 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a11a2d7708b797f4b4938bfdb18ee927433d3844be3300a7087fda27661b4d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:56Z\\\",\\\"message\\\":\\\"2025-11-24T12:28:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3b33be24-f145-4303-8cbd-b8929dd0721c\\\\n2025-11-24T12:28:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3b33be24-f145-4303-8cbd-b8929dd0721c to /host/opt/cni/bin/\\\\n2025-11-24T12:28:11Z [verbose] multus-daemon started\\\\n2025-11-24T12:28:11Z [verbose] Readiness Indicator file check\\\\n2025-11-24T12:28:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.160077 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:42Z\\\",\\\"message\\\":\\\"r-node configs for network=default: []services.lbConfig(nil)\\\\nF1124 12:28:42.295074 6424 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z]\\\\nI1124 12:28:42.296255 6424 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1124 12:28:42.296274 6424 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c2
0cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.173423 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r955c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6662f3ec-8806-4797-a7a5-f1606c4a54cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r955c\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.185599 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.197505 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.197589 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.197613 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 
12:28:58.197644 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.197670 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:58Z","lastTransitionTime":"2025-11-24T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.202776 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.220043 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.234920 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.247670 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204c180aed3e42b0712bdf4045ff2c33d8e87
2767dbd0d48b80a72de0bbaee46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.300990 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.301069 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.301083 4756 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.301110 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.301129 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:58Z","lastTransitionTime":"2025-11-24T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.410287 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.410353 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.410365 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.410387 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.410400 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:58Z","lastTransitionTime":"2025-11-24T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.475196 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.475260 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:28:58 crc kubenswrapper[4756]: E1124 12:28:58.475438 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:28:58 crc kubenswrapper[4756]: E1124 12:28:58.475561 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.476107 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.476339 4756 scope.go:117] "RemoveContainer" containerID="2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474" Nov 24 12:28:58 crc kubenswrapper[4756]: E1124 12:28:58.476536 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" Nov 24 12:28:58 crc kubenswrapper[4756]: E1124 12:28:58.478298 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.494292 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49680242-2d7d-4591-97a5-6a13e5fc0cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9205c28e1f32d71dd30b61a2b2df71cb148f53e2b119af807f70acd56fa1e8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fc8af824259a3716e9d2705d5f65224260f5ed3320e44b34d37a2ba2f4dca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd51b4d4aec28f8525fdf78ca2d294427f4a87435400c81615d68c4bf2988ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.512788 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.512828 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.512842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.512860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.512874 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:58Z","lastTransitionTime":"2025-11-24T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.515419 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.533921 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a11a2d7708b797f4b4938bfdb18ee927433d3844be3300a7087fda27661b4d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:56Z\\\",\\\"message\\\":\\\"2025-11-24T12:28:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3b33be24-f145-4303-8cbd-b8929dd0721c\\\\n2025-11-24T12:28:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3b33be24-f145-4303-8cbd-b8929dd0721c to /host/opt/cni/bin/\\\\n2025-11-24T12:28:11Z [verbose] multus-daemon started\\\\n2025-11-24T12:28:11Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T12:28:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.558008 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:42Z\\\",\\\"message\\\":\\\"r-node configs for network=default: []services.lbConfig(nil)\\\\nF1124 12:28:42.295074 
6424 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z]\\\\nI1124 12:28:42.296255 6424 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1124 12:28:42.296274 6424 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c2
0cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.572743 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r955c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6662f3ec-8806-4797-a7a5-f1606c4a54cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r955c\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.587295 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.600053 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.615140 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.615218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.615230 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.615249 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.615260 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:58Z","lastTransitionTime":"2025-11-24T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.617963 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.630240 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.642134 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204c180aed3e42b0712bdf4045ff2c33d8e87
2767dbd0d48b80a72de0bbaee46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.659957 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159
ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.674097 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.684981 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.695553 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.708180 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.717489 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.717533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.717547 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.717565 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.717577 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:58Z","lastTransitionTime":"2025-11-24T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.720597 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.734577 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.745980 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:58Z is after 2025-08-24T17:21:41Z" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.820088 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.820176 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.820191 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.820208 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.820222 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:58Z","lastTransitionTime":"2025-11-24T12:28:58Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.922805 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.922838 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.922848 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.922864 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:58 crc kubenswrapper[4756]: I1124 12:28:58.922874 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:58Z","lastTransitionTime":"2025-11-24T12:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.024784 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.024817 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.024827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.024839 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.024850 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:59Z","lastTransitionTime":"2025-11-24T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.131510 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.131555 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.131564 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.131578 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.131594 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:59Z","lastTransitionTime":"2025-11-24T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.234224 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.234465 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.234545 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.234659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.234724 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:59Z","lastTransitionTime":"2025-11-24T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.336973 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.337012 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.337022 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.337037 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.337048 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:59Z","lastTransitionTime":"2025-11-24T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.439552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.439620 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.439633 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.439672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.439686 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:59Z","lastTransitionTime":"2025-11-24T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.474548 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:28:59 crc kubenswrapper[4756]: E1124 12:28:59.474674 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.542234 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.542260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.542269 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.542282 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.542291 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:59Z","lastTransitionTime":"2025-11-24T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.645056 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.645103 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.645115 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.645134 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.645147 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:59Z","lastTransitionTime":"2025-11-24T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.747486 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.747546 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.747557 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.747574 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.747583 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:59Z","lastTransitionTime":"2025-11-24T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.849779 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.849812 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.849821 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.849835 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.849846 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:59Z","lastTransitionTime":"2025-11-24T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.953763 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.953795 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.953804 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.953818 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:28:59 crc kubenswrapper[4756]: I1124 12:28:59.953826 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:28:59Z","lastTransitionTime":"2025-11-24T12:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.056140 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.056193 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.056202 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.056214 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.056223 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:00Z","lastTransitionTime":"2025-11-24T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.157907 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.158109 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.158186 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.158249 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.158304 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:00Z","lastTransitionTime":"2025-11-24T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.260253 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.260505 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.260572 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.260648 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.260709 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:00Z","lastTransitionTime":"2025-11-24T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.363194 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.363253 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.363270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.363298 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.363316 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:00Z","lastTransitionTime":"2025-11-24T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.465920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.465955 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.465963 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.465979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.465989 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:00Z","lastTransitionTime":"2025-11-24T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.475393 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.475509 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.475405 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:29:00 crc kubenswrapper[4756]: E1124 12:29:00.475598 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:29:00 crc kubenswrapper[4756]: E1124 12:29:00.475511 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:29:00 crc kubenswrapper[4756]: E1124 12:29:00.475687 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.568138 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.568213 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.568225 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.568278 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.568288 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:00Z","lastTransitionTime":"2025-11-24T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.670836 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.671114 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.671241 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.671335 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.671416 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:00Z","lastTransitionTime":"2025-11-24T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.773340 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.773585 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.773646 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.773708 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.773763 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:00Z","lastTransitionTime":"2025-11-24T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.876174 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.876203 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.876217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.876234 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.876243 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:00Z","lastTransitionTime":"2025-11-24T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.978918 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.979359 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.979497 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.979639 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:00 crc kubenswrapper[4756]: I1124 12:29:00.979821 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:00Z","lastTransitionTime":"2025-11-24T12:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.083425 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.083495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.083515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.083540 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.083557 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:01Z","lastTransitionTime":"2025-11-24T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.186370 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.186417 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.186432 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.186450 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.186462 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:01Z","lastTransitionTime":"2025-11-24T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.289455 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.289542 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.289567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.289597 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.289620 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:01Z","lastTransitionTime":"2025-11-24T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.391724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.391768 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.391781 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.391801 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.391816 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:01Z","lastTransitionTime":"2025-11-24T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.475051 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:29:01 crc kubenswrapper[4756]: E1124 12:29:01.475245 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.493810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.493876 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.493892 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.493924 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.493943 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:01Z","lastTransitionTime":"2025-11-24T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.596647 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.596820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.596847 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.596877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.596905 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:01Z","lastTransitionTime":"2025-11-24T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.699367 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.699434 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.699451 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.699476 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.699495 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:01Z","lastTransitionTime":"2025-11-24T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.801876 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.801914 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.801921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.801936 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.801944 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:01Z","lastTransitionTime":"2025-11-24T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.905263 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.905345 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.905372 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.905406 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:01 crc kubenswrapper[4756]: I1124 12:29:01.905430 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:01Z","lastTransitionTime":"2025-11-24T12:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.008533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.008590 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.008602 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.008618 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.008629 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:02Z","lastTransitionTime":"2025-11-24T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.110861 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.110923 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.110935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.110949 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.110959 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:02Z","lastTransitionTime":"2025-11-24T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.214514 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.214566 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.214590 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.214619 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.214642 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:02Z","lastTransitionTime":"2025-11-24T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.332696 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.332762 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.332782 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.332804 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.332820 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:02Z","lastTransitionTime":"2025-11-24T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.436102 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.436212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.436239 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.436272 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.436300 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:02Z","lastTransitionTime":"2025-11-24T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.474966 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:29:02 crc kubenswrapper[4756]: E1124 12:29:02.475120 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.475442 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:29:02 crc kubenswrapper[4756]: E1124 12:29:02.475538 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.475780 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:29:02 crc kubenswrapper[4756]: E1124 12:29:02.475860 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.540006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.540060 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.540080 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.540105 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.540124 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:02Z","lastTransitionTime":"2025-11-24T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.643002 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.643053 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.643069 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.643092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.643110 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:02Z","lastTransitionTime":"2025-11-24T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.746020 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.746085 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.746107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.746130 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.746148 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:02Z","lastTransitionTime":"2025-11-24T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.850014 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.850064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.850081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.850103 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.850120 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:02Z","lastTransitionTime":"2025-11-24T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.953003 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.953071 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.953097 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.953191 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:02 crc kubenswrapper[4756]: I1124 12:29:02.953218 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:02Z","lastTransitionTime":"2025-11-24T12:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.056147 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.056288 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.056308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.056332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.056351 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:03Z","lastTransitionTime":"2025-11-24T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.159025 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.159130 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.159196 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.159227 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.159244 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:03Z","lastTransitionTime":"2025-11-24T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.262929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.262997 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.263015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.263044 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.263062 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:03Z","lastTransitionTime":"2025-11-24T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.365944 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.366012 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.366051 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.366085 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.366112 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:03Z","lastTransitionTime":"2025-11-24T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.468940 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.469029 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.469061 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.469128 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.469204 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:03Z","lastTransitionTime":"2025-11-24T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.474718 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:29:03 crc kubenswrapper[4756]: E1124 12:29:03.474921 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.572373 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.572424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.572445 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.572475 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.572496 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:03Z","lastTransitionTime":"2025-11-24T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.678583 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.678646 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.678666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.678689 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.678706 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:03Z","lastTransitionTime":"2025-11-24T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.781951 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.782044 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.782066 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.782095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.782117 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:03Z","lastTransitionTime":"2025-11-24T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.885238 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.885306 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.885328 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.885355 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.885381 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:03Z","lastTransitionTime":"2025-11-24T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.988041 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.988102 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.988113 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.988132 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:03 crc kubenswrapper[4756]: I1124 12:29:03.988195 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:03Z","lastTransitionTime":"2025-11-24T12:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.091650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.091721 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.091741 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.091766 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.091788 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:04Z","lastTransitionTime":"2025-11-24T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.194909 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.194982 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.194997 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.195019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.195034 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:04Z","lastTransitionTime":"2025-11-24T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.298003 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.298061 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.298078 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.298100 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.298117 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:04Z","lastTransitionTime":"2025-11-24T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.401423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.401495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.401513 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.401542 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.401567 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:04Z","lastTransitionTime":"2025-11-24T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.475299 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:29:04 crc kubenswrapper[4756]: E1124 12:29:04.475461 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.475541 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:29:04 crc kubenswrapper[4756]: E1124 12:29:04.475712 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.476022 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:29:04 crc kubenswrapper[4756]: E1124 12:29:04.476581 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.503855 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.503935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.503961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.503990 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.504013 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:04Z","lastTransitionTime":"2025-11-24T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.607198 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.607321 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.607342 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.607377 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.607415 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:04Z","lastTransitionTime":"2025-11-24T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.710404 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.710479 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.710503 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.710533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.710554 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:04Z","lastTransitionTime":"2025-11-24T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.814701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.814767 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.814789 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.814818 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.814841 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:04Z","lastTransitionTime":"2025-11-24T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.918613 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.918920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.919121 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.919386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:04 crc kubenswrapper[4756]: I1124 12:29:04.919493 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:04Z","lastTransitionTime":"2025-11-24T12:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.022561 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.022908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.022997 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.023092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.023233 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.125426 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.125816 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.125913 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.126000 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.126087 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.229762 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.229832 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.229850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.229877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.229895 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.332587 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.332644 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.332656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.332677 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.332693 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.389254 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.389386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.389409 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.389434 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.389453 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: E1124 12:29:05.407707 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:05Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.411826 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.411887 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.411905 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.411934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.411952 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: E1124 12:29:05.423430 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:05Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.427047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.427074 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.427090 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.427105 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.427114 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.451751 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.451826 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.451845 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.451877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.451898 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: E1124 12:29:05.466484 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:05Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.471062 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.471105 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.471120 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.471141 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.471179 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.474670 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:29:05 crc kubenswrapper[4756]: E1124 12:29:05.474884 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:29:05 crc kubenswrapper[4756]: E1124 12:29:05.502513 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T12:29:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a30f56e6-fd04-4fe7-a4af-c8a9fa3e621f\\\",\\\"systemUUID\\\":\\\"76b0c406-a550-4a16-95f4-45deb24662b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:05Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:05 crc kubenswrapper[4756]: E1124 12:29:05.502772 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.504921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.504969 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.504988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.505016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.505035 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.608441 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.608515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.608534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.608559 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.608576 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.711453 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.711515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.711538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.711568 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.711592 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.816065 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.816144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.816205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.816243 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.816267 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.920097 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.920195 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.920214 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.920237 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:05 crc kubenswrapper[4756]: I1124 12:29:05.920253 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:05Z","lastTransitionTime":"2025-11-24T12:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.022319 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.022356 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.022363 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.022379 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.022391 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:06Z","lastTransitionTime":"2025-11-24T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.126032 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.126092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.126111 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.126137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.126184 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:06Z","lastTransitionTime":"2025-11-24T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.229718 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.229778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.229790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.229811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.229822 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:06Z","lastTransitionTime":"2025-11-24T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.332690 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.332767 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.332787 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.332814 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.332834 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:06Z","lastTransitionTime":"2025-11-24T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.438215 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.438280 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.438295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.438321 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.438335 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:06Z","lastTransitionTime":"2025-11-24T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.475341 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:29:06 crc kubenswrapper[4756]: E1124 12:29:06.475535 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.475366 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:29:06 crc kubenswrapper[4756]: E1124 12:29:06.475618 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.475344 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:29:06 crc kubenswrapper[4756]: E1124 12:29:06.475661 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.546987 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.547081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.547098 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.547126 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.547149 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:06Z","lastTransitionTime":"2025-11-24T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.652011 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.652078 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.652103 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.652135 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.652191 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:06Z","lastTransitionTime":"2025-11-24T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.755388 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.755459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.755477 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.755504 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.755522 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:06Z","lastTransitionTime":"2025-11-24T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.858409 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.858515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.858536 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.858569 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.858594 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:06Z","lastTransitionTime":"2025-11-24T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.961889 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.961963 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.961988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.962023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:06 crc kubenswrapper[4756]: I1124 12:29:06.962045 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:06Z","lastTransitionTime":"2025-11-24T12:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.065512 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.065598 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.065618 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.065654 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.065676 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:07Z","lastTransitionTime":"2025-11-24T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.168416 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.168507 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.168526 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.168552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.168570 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:07Z","lastTransitionTime":"2025-11-24T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.271499 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.271570 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.271584 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.271607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.271623 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:07Z","lastTransitionTime":"2025-11-24T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.374452 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.374508 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.374523 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.374545 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.374562 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:07Z","lastTransitionTime":"2025-11-24T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.474823 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:29:07 crc kubenswrapper[4756]: E1124 12:29:07.475079 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.477289 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.477334 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.477347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.477365 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.477375 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:07Z","lastTransitionTime":"2025-11-24T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.580943 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.581007 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.581020 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.581040 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.581054 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:07Z","lastTransitionTime":"2025-11-24T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.684882 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.684942 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.684958 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.684984 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.685001 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:07Z","lastTransitionTime":"2025-11-24T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.788358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.788454 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.788480 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.788515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.788539 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:07Z","lastTransitionTime":"2025-11-24T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.891685 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.891734 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.891750 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.891796 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.891812 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:07Z","lastTransitionTime":"2025-11-24T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.994386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.994438 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.994450 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.994469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:07 crc kubenswrapper[4756]: I1124 12:29:07.994481 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:07Z","lastTransitionTime":"2025-11-24T12:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.097679 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.097752 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.097772 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.097798 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.097819 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:08Z","lastTransitionTime":"2025-11-24T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.199771 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.199812 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.199821 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.199837 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.199849 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:08Z","lastTransitionTime":"2025-11-24T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.302064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.302131 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.302148 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.302239 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.302265 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:08Z","lastTransitionTime":"2025-11-24T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.405070 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.405122 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.405136 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.405172 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.405184 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:08Z","lastTransitionTime":"2025-11-24T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.474865 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.474962 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.475274 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:29:08 crc kubenswrapper[4756]: E1124 12:29:08.475256 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:29:08 crc kubenswrapper[4756]: E1124 12:29:08.475505 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:29:08 crc kubenswrapper[4756]: E1124 12:29:08.475599 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.494587 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.495194 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.508630 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.508704 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.508730 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.508764 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.508892 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:08Z","lastTransitionTime":"2025-11-24T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.509239 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4692afa3618ecfccda6de61ef8b45b01ac3c73b5cd78add28119e0e5edfc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5c6393f77320770ee1c3cd8a053a9ebf819789d681ea10d8d94949c43eed8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.527806 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f22a5d1-b6e3-47e7-84de-f3d56e3eb50e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdbe0f1ffa308252d0343fbf9f29473555b938c7fa2853155248bf4b5c55b412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d66d7629213fbd1a1b4e6e6cd96af62ff40e3eade33c0581b2de87b4658cf4fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4e334407c4c83191638fbf3e3223b353be3e05dc9aa12b9ab2fc98779f3baa8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534ca36aa5e1c78e78d9d2bf3549f3c2ba738f5b11f6c8aa3a22697af80ef207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e870
21866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e87021866b829821108c53a3132adcecc866188d1d0975e0155c2ef6fe1a198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0f6d0aaf0ebfc60570e0d692575c83ae144e6486731115c3ec747e9c9066c22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://394b96e004afe1d12d76da0e7cbbf64a74308dded22776309361d2ed28fba720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8fpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqhbk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.541346 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wbl2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf97ea-6f41-4eb9-9e2c-fadff2d40af0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213ebffb12ad9711379127a59f6ba3609c0cd52176caa0ef54fa12308db8d89b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmvvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wbl2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.555772 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49680242-2d7d-4591-97a5-6a13e5fc0cb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9205c28e1f32d71dd30b61a2b2df71cb148f53e2b119af807f70acd56fa1e8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47fc8af824259a3716e9d2705d5f65224260f5ed3320e44b34d37a2ba2f4dca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd51b4d4aec28f8525fdf78ca2d294427f4a87435400c81615d68c4bf2988ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d8756f7a3c09312723a5ee66d25ff31b4add408f6531432c5633211e98726300\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.578083 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f28b3b0-e2ff-4547-a9f0-9175ee536a51\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5f742dae8707ddc845fe25addc3048b533fa57010ad55560710e9b43ee70ca\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf3d715992b9cbc96de3f9a224ae96d3a904555be015c12d8273b63ca643cc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a60f78647aaa9ef8ce5e4f4e91c73cb7fe3373b58ecabc64621252e4f72266\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1271e36d0b437667953b399be571f0921e78b3db8d122397a2acc2075b0428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18a719865c7e365b1f34f26c6062f12b769ee02b41e49cd8e0ee824d68e599c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4adf10c9d22fcfd8c8b1a8027ca077b0a957bc44b6d044e301a24f2e53dda9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://506e45f0089eef4981b175b9302e4d8295db75bc026faad0fcb9a82c9bef2a99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87d96ff2f4282d0aef0fc27f3aa0bfbb6fee4fe7bc5397621c74fab50e2326d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.596632 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-66bwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"077d4abb-b72e-499f-98c2-628720d701dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a11a2d7708b797f4b4938bfdb18ee927433d3844be3300a7087fda27661b4d17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:56Z\\\",\\\"message\\\":\\\"2025-11-24T12:28:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3b33be24-f145-4303-8cbd-b8929dd0721c\\\\n2025-11-24T12:28:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3b33be24-f145-4303-8cbd-b8929dd0721c to /host/opt/cni/bin/\\\\n2025-11-24T12:28:11Z [verbose] multus-daemon started\\\\n2025-11-24T12:28:11Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T12:28:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x44lq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-66bwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.611534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.611608 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.611620 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.611669 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.611686 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:08Z","lastTransitionTime":"2025-11-24T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.617393 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T12:28:42Z\\\",\\\"message\\\":\\\"r-node configs for network=default: []services.lbConfig(nil)\\\\nF1124 12:28:42.295074 6424 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:28:42Z is after 2025-08-24T17:21:41Z]\\\\nI1124 12:28:42.296255 6424 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI1124 12:28:42.296274 6424 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hnsz7_openshift-ovn-kubernetes(60bc5508-89b8-4cc3-a0d6-e30abed70f05)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7add6a1ed5e874e0c2
0cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zw8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hnsz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.629868 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r955c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6662f3ec-8806-4797-a7a5-f1606c4a54cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvw8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r955c\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.645240 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63997c8-1931-49c3-b405-46ac3c0f9810\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://749e0060132011f3a584ac4fe3ec178a08148063df5378ab12ec926cbea26163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac43adee09c1dd90e5c74efb0c70eb5583adc5054bfdb160623fff9cce1622d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d6e09fc0ba91fdc8237c4c2731eb19b07d810db28db2f7cc4c6b0c7efda9910\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81c8a73117c2a6358af241311e8423fd24a7741bc81272d09f0746f16024ecb0\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.661055 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.675529 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.689380 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0f50ecd-811f-4df2-ae0c-83a787d6cbec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ae9c8eb9af6ed1f42a4b687a33f85c81f83245c6197dded1d90013b840fed0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6
bcbc388b06ca0b77ba191874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmsdd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8p8dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.703855 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b70e3fcb-095c-48cb-8152-3a6a125d87e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a10a7ef9103f5a32111bf1e404f67677bec7567d9e43ef3afed78ab9c613ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204c180aed3e42b0712bdf4045ff2c33d8e87
2767dbd0d48b80a72de0bbaee46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pvcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f7x8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.714206 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.714284 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.714296 4756 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.714320 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.714335 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:08Z","lastTransitionTime":"2025-11-24T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.720057 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1294581a-0e4f-46f3-a360-16260d660b48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:27:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://451ad9fe986b57c4d0938ddd38a62bb3051a95c4d4750a77fc702df32da102f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4c290ae6320ead0ceeaef28db7eb2919be808d022a8dfb61396af365964e46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f82726a08a08656d421a491bdd1773de46e23e231e529fb8103fdd70832f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d23c4864dca17b8a917a553f4dbbe07e1f60d0b369889f176b4844e207c21ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f89cf93258b6b7e4a662722f2353b605dac464fd023868b6475ea6b9f398fb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T12:28:07Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1124 12:28:02.216650 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 12:28:02.218528 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1584717347/tls.crt::/tmp/serving-cert-1584717347/tls.key\\\\\\\"\\\\nI1124 12:28:07.636930 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 12:28:07.641586 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 12:28:07.641621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 12:28:07.641680 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 12:28:07.641698 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 12:28:07.652009 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 12:28:07.652057 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652064 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 12:28:07.652069 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 12:28:07.652075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 12:28:07.652077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 12:28:07.652080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 12:28:07.652082 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 12:28:07.656450 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0236a90addff753e92d9efbc5e4b533c79a13440c26d374d11260571abeafba9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:27:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d8647bdc833435a642898fbfe370159ca7e7d6e26d98b45ba21b54b8cf61b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T12:27:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-24T12:27:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:27:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.735422 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90c879175b3558714731286e433497d4ab79f80d89c9f5c426aebee7c00bcc5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.750663 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88da4961e0500e64b54df8b9896818e1f690423d7603b38bc446f454b3e9cbb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.763401 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h8ht2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8d145d-cbd0-41c5-9f2c-5c73f63e76b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T12:28:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317c53b0443327c624d62e7275d70cfdf428a2422b9acdee6f91aa104ef8579c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T12:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-z9lv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T12:28:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h8ht2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T12:29:08Z is after 2025-08-24T17:21:41Z" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.816339 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.816394 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.816409 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.816430 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.816443 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:08Z","lastTransitionTime":"2025-11-24T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.920481 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.920552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.920578 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.920613 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:08 crc kubenswrapper[4756]: I1124 12:29:08.920639 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:08Z","lastTransitionTime":"2025-11-24T12:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.023626 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.023670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.023678 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.023694 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.023705 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:09Z","lastTransitionTime":"2025-11-24T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.126250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.126315 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.126329 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.126347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.126364 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:09Z","lastTransitionTime":"2025-11-24T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.229024 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.229077 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.229087 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.229111 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.229122 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:09Z","lastTransitionTime":"2025-11-24T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.331440 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.331493 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.331505 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.331522 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.331534 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:09Z","lastTransitionTime":"2025-11-24T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.434266 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.434317 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.434326 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.434342 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.434352 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:09Z","lastTransitionTime":"2025-11-24T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.475225 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:29:09 crc kubenswrapper[4756]: E1124 12:29:09.475390 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.537682 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.537751 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.537773 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.537797 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.537814 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:09Z","lastTransitionTime":"2025-11-24T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.640810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.640880 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.640901 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.640930 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.640950 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:09Z","lastTransitionTime":"2025-11-24T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.743763 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.743824 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.743842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.743867 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.743884 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:09Z","lastTransitionTime":"2025-11-24T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.847958 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.848043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.848064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.848193 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.848257 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:09Z","lastTransitionTime":"2025-11-24T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.952152 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.952254 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.952274 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.952301 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:09 crc kubenswrapper[4756]: I1124 12:29:09.952319 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:09Z","lastTransitionTime":"2025-11-24T12:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.055112 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.055221 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.055247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.055280 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.055302 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:10Z","lastTransitionTime":"2025-11-24T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.158531 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.158576 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.158614 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.158635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.158649 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:10Z","lastTransitionTime":"2025-11-24T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.261601 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.261648 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.261658 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.261673 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.261682 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:10Z","lastTransitionTime":"2025-11-24T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.364288 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.364358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.364378 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.364402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.364419 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:10Z","lastTransitionTime":"2025-11-24T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.468320 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.468394 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.468411 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.468440 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.468458 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:10Z","lastTransitionTime":"2025-11-24T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.475582 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.475681 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.475786 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:29:10 crc kubenswrapper[4756]: E1124 12:29:10.475837 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:29:10 crc kubenswrapper[4756]: E1124 12:29:10.475998 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:29:10 crc kubenswrapper[4756]: E1124 12:29:10.476088 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.572120 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.572220 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.572233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.572255 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.572273 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:10Z","lastTransitionTime":"2025-11-24T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.675458 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.675527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.675551 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.675585 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.675610 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:10Z","lastTransitionTime":"2025-11-24T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.778838 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.778896 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.778908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.778931 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.778944 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:10Z","lastTransitionTime":"2025-11-24T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.882191 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.882264 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.882280 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.882313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.882339 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:10Z","lastTransitionTime":"2025-11-24T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.985753 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.985817 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.985830 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.985852 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:10 crc kubenswrapper[4756]: I1124 12:29:10.985867 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:10Z","lastTransitionTime":"2025-11-24T12:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.089556 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.089608 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.089617 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.089638 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.089649 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:11Z","lastTransitionTime":"2025-11-24T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.193021 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.193074 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.193086 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.193104 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.193116 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:11Z","lastTransitionTime":"2025-11-24T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.296436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.296495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.296507 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.296527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.296543 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:11Z","lastTransitionTime":"2025-11-24T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.399415 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.399455 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.399468 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.399485 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.399497 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:11Z","lastTransitionTime":"2025-11-24T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.475008 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:29:11 crc kubenswrapper[4756]: E1124 12:29:11.475183 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.502418 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.502458 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.502471 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.502490 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.502501 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:11Z","lastTransitionTime":"2025-11-24T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.606555 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.606637 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.606662 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.606694 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.606717 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:11Z","lastTransitionTime":"2025-11-24T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.709409 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.709460 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.709470 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.709487 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.709498 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:11Z","lastTransitionTime":"2025-11-24T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.812434 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.812486 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.812495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.812511 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.812524 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:11Z","lastTransitionTime":"2025-11-24T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.915677 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.915749 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.915772 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.915862 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:11 crc kubenswrapper[4756]: I1124 12:29:11.915889 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:11Z","lastTransitionTime":"2025-11-24T12:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.019326 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.019369 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.019382 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.019400 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.019411 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:12Z","lastTransitionTime":"2025-11-24T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.122347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.122461 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.122490 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.122517 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.122538 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:12Z","lastTransitionTime":"2025-11-24T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.225240 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.225320 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.225330 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.225391 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.225402 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:12Z","lastTransitionTime":"2025-11-24T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.244953 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.245082 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.245116 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.245138 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.245201 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.245322 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.245387 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.245414 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.245405 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.245415 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.245471 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.245484 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.245326 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:30:16.245257497 +0000 UTC m=+148.602771639 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.245434 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.245536 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:30:16.245513052 +0000 UTC m=+148.603027194 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.245559 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 12:30:16.245552943 +0000 UTC m=+148.603067085 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.245593 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 12:30:16.245567483 +0000 UTC m=+148.603081625 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.245632 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 12:30:16.245625145 +0000 UTC m=+148.603139287 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.332191 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.332234 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.332245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.332265 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.332278 4756 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:12Z","lastTransitionTime":"2025-11-24T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.435637 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.435732 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.435827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.435851 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.435868 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:12Z","lastTransitionTime":"2025-11-24T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.475682 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.475826 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.475932 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.476538 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.476951 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:29:12 crc kubenswrapper[4756]: E1124 12:29:12.477098 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.539257 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.539349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.539378 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.539409 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.539433 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:12Z","lastTransitionTime":"2025-11-24T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.642700 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.642823 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.642847 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.642892 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.642920 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:12Z","lastTransitionTime":"2025-11-24T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.745937 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.746015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.746037 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.746062 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.746082 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:12Z","lastTransitionTime":"2025-11-24T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.849210 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.849260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.849277 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.849302 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.849320 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:12Z","lastTransitionTime":"2025-11-24T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.951603 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.951693 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.951710 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.951734 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:12 crc kubenswrapper[4756]: I1124 12:29:12.951753 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:12Z","lastTransitionTime":"2025-11-24T12:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.053656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.053693 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.053705 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.053722 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.053735 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:13Z","lastTransitionTime":"2025-11-24T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.155667 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.155718 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.155729 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.155746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.155757 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:13Z","lastTransitionTime":"2025-11-24T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.258461 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.258539 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.258573 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.258594 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.258610 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:13Z","lastTransitionTime":"2025-11-24T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.360797 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.360865 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.360883 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.360904 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.360916 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:13Z","lastTransitionTime":"2025-11-24T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.463770 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.463824 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.463839 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.463861 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.463877 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:13Z","lastTransitionTime":"2025-11-24T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.474672 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:29:13 crc kubenswrapper[4756]: E1124 12:29:13.474870 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.476347 4756 scope.go:117] "RemoveContainer" containerID="2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.567409 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.568010 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.568039 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.568079 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.568106 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:13Z","lastTransitionTime":"2025-11-24T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.670612 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.670657 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.670668 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.670688 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.670697 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:13Z","lastTransitionTime":"2025-11-24T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.773351 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.773410 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.773422 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.773442 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.773457 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:13Z","lastTransitionTime":"2025-11-24T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.875836 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.875893 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.875903 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.875919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.875953 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:13Z","lastTransitionTime":"2025-11-24T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.979390 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.979435 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.979445 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.979461 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:13 crc kubenswrapper[4756]: I1124 12:29:13.979472 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:13Z","lastTransitionTime":"2025-11-24T12:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.046892 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovnkube-controller/2.log" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.050575 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerStarted","Data":"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47"} Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.051100 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.081393 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.081374769 podStartE2EDuration="6.081374769s" podCreationTimestamp="2025-11-24 12:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:14.080649374 +0000 UTC m=+86.438163536" watchObservedRunningTime="2025-11-24 12:29:14.081374769 +0000 UTC m=+86.438888911" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.081572 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.081599 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.081609 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.081628 4756 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.081642 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:14Z","lastTransitionTime":"2025-11-24T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.096231 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=65.096204612 podStartE2EDuration="1m5.096204612s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:14.095932346 +0000 UTC m=+86.453446508" watchObservedRunningTime="2025-11-24 12:29:14.096204612 +0000 UTC m=+86.453718744" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.155461 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podStartSLOduration=66.155434069 podStartE2EDuration="1m6.155434069s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:14.142303393 +0000 UTC m=+86.499817555" watchObservedRunningTime="2025-11-24 12:29:14.155434069 +0000 UTC m=+86.512948221" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.172559 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f7x8x" podStartSLOduration=65.17253719 
podStartE2EDuration="1m5.17253719s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:14.155887149 +0000 UTC m=+86.513401301" watchObservedRunningTime="2025-11-24 12:29:14.17253719 +0000 UTC m=+86.530051342" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.185110 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.185172 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.185182 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.185201 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.185212 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:14Z","lastTransitionTime":"2025-11-24T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.194466 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=66.194435611 podStartE2EDuration="1m6.194435611s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:14.173540151 +0000 UTC m=+86.531054303" watchObservedRunningTime="2025-11-24 12:29:14.194435611 +0000 UTC m=+86.551949773" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.250593 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-h8ht2" podStartSLOduration=66.250567484 podStartE2EDuration="1m6.250567484s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:14.229386088 +0000 UTC m=+86.586900230" watchObservedRunningTime="2025-11-24 12:29:14.250567484 +0000 UTC m=+86.608081636" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.286348 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bqhbk" podStartSLOduration=66.286320957 podStartE2EDuration="1m6.286320957s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:14.286150273 +0000 UTC m=+86.643664425" watchObservedRunningTime="2025-11-24 12:29:14.286320957 +0000 UTC m=+86.643835109" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.287177 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.287216 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.287225 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.287239 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.287250 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:14Z","lastTransitionTime":"2025-11-24T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.299153 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wbl2t" podStartSLOduration=66.299132377 podStartE2EDuration="1m6.299132377s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:14.298734219 +0000 UTC m=+86.656248361" watchObservedRunningTime="2025-11-24 12:29:14.299132377 +0000 UTC m=+86.656646519" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.312603 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.31257957 podStartE2EDuration="38.31257957s" podCreationTimestamp="2025-11-24 12:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:14.311962287 +0000 UTC 
m=+86.669476429" watchObservedRunningTime="2025-11-24 12:29:14.31257957 +0000 UTC m=+86.670093712" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.338927 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=67.338907355 podStartE2EDuration="1m7.338907355s" podCreationTimestamp="2025-11-24 12:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:14.337416854 +0000 UTC m=+86.694931006" watchObservedRunningTime="2025-11-24 12:29:14.338907355 +0000 UTC m=+86.696421507" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.352545 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-66bwb" podStartSLOduration=66.352528222 podStartE2EDuration="1m6.352528222s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:14.351767866 +0000 UTC m=+86.709281998" watchObservedRunningTime="2025-11-24 12:29:14.352528222 +0000 UTC m=+86.710042364" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.376383 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podStartSLOduration=66.376359504 podStartE2EDuration="1m6.376359504s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:14.375056397 +0000 UTC m=+86.732570549" watchObservedRunningTime="2025-11-24 12:29:14.376359504 +0000 UTC m=+86.733873646" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.389419 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:14 crc 
kubenswrapper[4756]: I1124 12:29:14.389473 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.389488 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.389506 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.389523 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:14Z","lastTransitionTime":"2025-11-24T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.475594 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.475679 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.475727 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:29:14 crc kubenswrapper[4756]: E1124 12:29:14.475834 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:29:14 crc kubenswrapper[4756]: E1124 12:29:14.475961 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:29:14 crc kubenswrapper[4756]: E1124 12:29:14.476078 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.483668 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-r955c"] Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.483980 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:29:14 crc kubenswrapper[4756]: E1124 12:29:14.484184 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.492390 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.492424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.492436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.492453 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.492464 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:14Z","lastTransitionTime":"2025-11-24T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.600558 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.600755 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.600768 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.600787 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.600803 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:14Z","lastTransitionTime":"2025-11-24T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.703730 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.703794 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.703808 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.703828 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.703843 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:14Z","lastTransitionTime":"2025-11-24T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.806386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.806428 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.806436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.806451 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.806459 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:14Z","lastTransitionTime":"2025-11-24T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.908950 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.908989 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.908999 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.909019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:14 crc kubenswrapper[4756]: I1124 12:29:14.909029 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:14Z","lastTransitionTime":"2025-11-24T12:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.012083 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.012128 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.012138 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.012178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.012193 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:15Z","lastTransitionTime":"2025-11-24T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.114867 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.114918 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.114934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.114954 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.114970 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:15Z","lastTransitionTime":"2025-11-24T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.217770 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.217810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.217824 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.217845 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.217860 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:15Z","lastTransitionTime":"2025-11-24T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.320574 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.320635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.320648 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.320668 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.320686 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:15Z","lastTransitionTime":"2025-11-24T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.423454 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.423497 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.423511 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.423529 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.423541 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:15Z","lastTransitionTime":"2025-11-24T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.525949 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.525995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.526009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.526030 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.526066 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:15Z","lastTransitionTime":"2025-11-24T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.527373 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.527434 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.527447 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.527483 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.527496 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T12:29:15Z","lastTransitionTime":"2025-11-24T12:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.582499 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr"] Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.582934 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.585232 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.585308 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.586073 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.586138 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.683985 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d3a5e967-efb9-4af0-8fc3-46772a7365ec-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.684098 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d3a5e967-efb9-4af0-8fc3-46772a7365ec-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.684360 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d3a5e967-efb9-4af0-8fc3-46772a7365ec-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.684444 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3a5e967-efb9-4af0-8fc3-46772a7365ec-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.684503 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a5e967-efb9-4af0-8fc3-46772a7365ec-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.786135 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a5e967-efb9-4af0-8fc3-46772a7365ec-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.786257 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d3a5e967-efb9-4af0-8fc3-46772a7365ec-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" 
Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.786298 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d3a5e967-efb9-4af0-8fc3-46772a7365ec-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.786375 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a5e967-efb9-4af0-8fc3-46772a7365ec-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.786404 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3a5e967-efb9-4af0-8fc3-46772a7365ec-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.786443 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d3a5e967-efb9-4af0-8fc3-46772a7365ec-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.786552 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d3a5e967-efb9-4af0-8fc3-46772a7365ec-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.787866 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a5e967-efb9-4af0-8fc3-46772a7365ec-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.794951 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a5e967-efb9-4af0-8fc3-46772a7365ec-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.807372 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3a5e967-efb9-4af0-8fc3-46772a7365ec-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vblvr\" (UID: \"d3a5e967-efb9-4af0-8fc3-46772a7365ec\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: I1124 12:29:15.897655 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" Nov 24 12:29:15 crc kubenswrapper[4756]: W1124 12:29:15.915224 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3a5e967_efb9_4af0_8fc3_46772a7365ec.slice/crio-18e2abe65ba46fb98d438b87ff1a6937ff3515427f022febbee0524c187d000c WatchSource:0}: Error finding container 18e2abe65ba46fb98d438b87ff1a6937ff3515427f022febbee0524c187d000c: Status 404 returned error can't find the container with id 18e2abe65ba46fb98d438b87ff1a6937ff3515427f022febbee0524c187d000c Nov 24 12:29:16 crc kubenswrapper[4756]: I1124 12:29:16.059448 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" event={"ID":"d3a5e967-efb9-4af0-8fc3-46772a7365ec","Type":"ContainerStarted","Data":"18e2abe65ba46fb98d438b87ff1a6937ff3515427f022febbee0524c187d000c"} Nov 24 12:29:16 crc kubenswrapper[4756]: I1124 12:29:16.475626 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:29:16 crc kubenswrapper[4756]: I1124 12:29:16.475750 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:29:16 crc kubenswrapper[4756]: I1124 12:29:16.475750 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:29:16 crc kubenswrapper[4756]: I1124 12:29:16.475823 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:29:16 crc kubenswrapper[4756]: E1124 12:29:16.475988 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 12:29:16 crc kubenswrapper[4756]: E1124 12:29:16.476147 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 12:29:16 crc kubenswrapper[4756]: E1124 12:29:16.476473 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 12:29:16 crc kubenswrapper[4756]: E1124 12:29:16.476674 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r955c" podUID="6662f3ec-8806-4797-a7a5-f1606c4a54cf" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.064269 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" event={"ID":"d3a5e967-efb9-4af0-8fc3-46772a7365ec","Type":"ContainerStarted","Data":"dd5a91561d7437783c9e487d2924001692c9c30de2b5367e3cbeb1b909c4c1b2"} Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.076085 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.076255 4756 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.078748 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vblvr" podStartSLOduration=69.078728228 podStartE2EDuration="1m9.078728228s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:17.078073774 +0000 UTC m=+89.435587926" watchObservedRunningTime="2025-11-24 12:29:17.078728228 +0000 UTC m=+89.436242370" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.117750 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.118364 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.118762 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.119364 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.120673 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vh4jr"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.121107 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.121383 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.121863 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.122198 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mrnbw"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.122692 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.122985 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.123526 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.124081 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.124506 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:17 crc kubenswrapper[4756]: W1124 12:29:17.126555 4756 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Nov 24 12:29:17 crc kubenswrapper[4756]: E1124 12:29:17.126603 4756 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.126624 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 24 12:29:17 crc 
kubenswrapper[4756]: I1124 12:29:17.126684 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.126685 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: W1124 12:29:17.126782 4756 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.126836 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: E1124 12:29:17.126826 4756 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.126932 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.127088 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.132789 4756 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tsnkp"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.146375 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.146616 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.146869 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.147964 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nmhtt"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.148787 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.149303 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.149473 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.151321 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mjd6w"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.160319 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.169896 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.170522 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.171387 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hz6cv"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.171942 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hz6cv" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.173373 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.174292 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.174659 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.174791 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.174914 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-c698z"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.174947 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 12:29:17 crc 
kubenswrapper[4756]: I1124 12:29:17.175435 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-srchr"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.175742 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p5f6n"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.176109 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.176293 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.176443 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.176603 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.176796 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.176954 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.177105 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.177264 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.177531 4756 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"audit-1" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.177706 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.177939 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.178008 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.178114 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.178223 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.178450 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.178651 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.178477 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.178849 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.178863 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 
12:29:17.179056 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.179174 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.179467 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.179653 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.179757 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.179771 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.179846 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.179926 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.179967 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.180050 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.180138 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.180052 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.179935 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.180780 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.180882 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.180884 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.181006 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.181122 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.181242 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.181313 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.181245 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.181424 4756 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.181300 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.182156 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xlwmt"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.182773 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.183482 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vh4jr"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.184447 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.184602 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.184705 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.185112 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.185256 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.185418 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.186521 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.186540 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.186640 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.186682 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.186780 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.186823 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.186891 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.186986 4756 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.187023 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.187087 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.187568 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.189711 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.192678 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.188970 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.189062 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.189950 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.193342 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dlzvw"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.193779 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 
12:29:17.194146 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.194250 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.194325 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlzvw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.194767 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.195911 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.202616 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.203181 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.210883 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86948975-86fc-40c1-970b-1fa5d7860497-auth-proxy-config\") pod \"machine-approver-56656f9798-ppzl5\" (UID: \"86948975-86fc-40c1-970b-1fa5d7860497\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.210939 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/962ff4d2-ff0d-4e75-b04d-0d318c1980de-encryption-config\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.210963 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.210984 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xhcw8"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.210986 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0925731-afa5-4a9c-b72d-9806c16dab59-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: 
\"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.212008 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/962ff4d2-ff0d-4e75-b04d-0d318c1980de-etcd-client\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225280 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vpfj\" (UniqueName: \"kubernetes.io/projected/d799ee04-948d-4c0b-8e84-f209d40380fc-kube-api-access-4vpfj\") pod \"console-operator-58897d9998-mjd6w\" (UID: \"d799ee04-948d-4c0b-8e84-f209d40380fc\") " pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225402 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/86948975-86fc-40c1-970b-1fa5d7860497-machine-approver-tls\") pod \"machine-approver-56656f9798-ppzl5\" (UID: \"86948975-86fc-40c1-970b-1fa5d7860497\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225434 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225487 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c4e2bff-ebd3-4ba3-aa22-605ad8193978-serving-cert\") pod \"openshift-config-operator-7777fb866f-c698z\" (UID: \"2c4e2bff-ebd3-4ba3-aa22-605ad8193978\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225509 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225562 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-config\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225579 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-etcd-client\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225604 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-audit-dir\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225630 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/962ff4d2-ff0d-4e75-b04d-0d318c1980de-audit-dir\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225667 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0925731-afa5-4a9c-b72d-9806c16dab59-service-ca-bundle\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225780 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9feafd03-8082-4d57-9611-602776ad0db6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vb2s9\" (UID: \"9feafd03-8082-4d57-9611-602776ad0db6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225799 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz7s8\" (UniqueName: \"kubernetes.io/projected/d587d404-97ce-49d5-92f9-360d94d6d061-kube-api-access-vz7s8\") pod \"downloads-7954f5f757-hz6cv\" (UID: \"d587d404-97ce-49d5-92f9-360d94d6d061\") " pod="openshift-console/downloads-7954f5f757-hz6cv" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225821 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225863 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d799ee04-948d-4c0b-8e84-f209d40380fc-trusted-ca\") pod \"console-operator-58897d9998-mjd6w\" (UID: \"d799ee04-948d-4c0b-8e84-f209d40380fc\") " pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225886 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225919 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d799ee04-948d-4c0b-8e84-f209d40380fc-config\") pod \"console-operator-58897d9998-mjd6w\" (UID: \"d799ee04-948d-4c0b-8e84-f209d40380fc\") " pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225941 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2939e314-f46f-468e-8890-4ac369fc7482-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6f6fd\" (UID: \"2939e314-f46f-468e-8890-4ac369fc7482\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225959 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-config\") pod \"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.225998 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2c4e2bff-ebd3-4ba3-aa22-605ad8193978-available-featuregates\") pod \"openshift-config-operator-7777fb866f-c698z\" (UID: \"2c4e2bff-ebd3-4ba3-aa22-605ad8193978\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226019 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/962ff4d2-ff0d-4e75-b04d-0d318c1980de-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226068 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz7dt\" (UniqueName: \"kubernetes.io/projected/7a28b73f-bc1c-41f3-a275-37dd4e44b507-kube-api-access-nz7dt\") pod \"openshift-apiserver-operator-796bbdcf4f-6mk7j\" (UID: \"7a28b73f-bc1c-41f3-a275-37dd4e44b507\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226089 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g8nq\" (UniqueName: \"kubernetes.io/projected/2939e314-f46f-468e-8890-4ac369fc7482-kube-api-access-5g8nq\") pod \"cluster-image-registry-operator-dc59b4c8b-6f6fd\" (UID: \"2939e314-f46f-468e-8890-4ac369fc7482\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226107 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-audit-policies\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226171 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt99b\" (UniqueName: \"kubernetes.io/projected/2c4e2bff-ebd3-4ba3-aa22-605ad8193978-kube-api-access-gt99b\") pod \"openshift-config-operator-7777fb866f-c698z\" (UID: \"2c4e2bff-ebd3-4ba3-aa22-605ad8193978\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226193 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-client-ca\") pod \"route-controller-manager-6576b87f9c-cl7lz\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226214 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ac3240-bc3d-4688-9aa1-1976279a656d-serving-cert\") pod 
\"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226314 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdrrp\" (UniqueName: \"kubernetes.io/projected/76ac3240-bc3d-4688-9aa1-1976279a656d-kube-api-access-vdrrp\") pod \"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226332 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86948975-86fc-40c1-970b-1fa5d7860497-config\") pod \"machine-approver-56656f9798-ppzl5\" (UID: \"86948975-86fc-40c1-970b-1fa5d7860497\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226350 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226373 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0925731-afa5-4a9c-b72d-9806c16dab59-serving-cert\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 crc 
kubenswrapper[4756]: I1124 12:29:17.226401 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a28b73f-bc1c-41f3-a275-37dd4e44b507-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6mk7j\" (UID: \"7a28b73f-bc1c-41f3-a275-37dd4e44b507\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226419 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2939e314-f46f-468e-8890-4ac369fc7482-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6f6fd\" (UID: \"2939e314-f46f-468e-8890-4ac369fc7482\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226436 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2939e314-f46f-468e-8890-4ac369fc7482-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6f6fd\" (UID: \"2939e314-f46f-468e-8890-4ac369fc7482\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226467 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nlzl\" (UniqueName: \"kubernetes.io/projected/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-kube-api-access-9nlzl\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226483 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-serving-cert\") pod \"route-controller-manager-6576b87f9c-cl7lz\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226538 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/962ff4d2-ff0d-4e75-b04d-0d318c1980de-serving-cert\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226568 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-client-ca\") pod \"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226587 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226613 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/962ff4d2-ff0d-4e75-b04d-0d318c1980de-audit-policies\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226632 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cttxw\" (UniqueName: \"kubernetes.io/projected/f0925731-afa5-4a9c-b72d-9806c16dab59-kube-api-access-cttxw\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226652 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthsd\" (UniqueName: \"kubernetes.io/projected/86948975-86fc-40c1-970b-1fa5d7860497-kube-api-access-tthsd\") pod \"machine-approver-56656f9798-ppzl5\" (UID: \"86948975-86fc-40c1-970b-1fa5d7860497\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226694 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dppbv\" (UniqueName: \"kubernetes.io/projected/962ff4d2-ff0d-4e75-b04d-0d318c1980de-kube-api-access-dppbv\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226807 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-config\") pod \"route-controller-manager-6576b87f9c-cl7lz\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226829 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a28b73f-bc1c-41f3-a275-37dd4e44b507-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6mk7j\" (UID: \"7a28b73f-bc1c-41f3-a275-37dd4e44b507\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226864 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-image-import-ca\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226886 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-encryption-config\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226906 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226930 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nmhtt\" 
(UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226973 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d799ee04-948d-4c0b-8e84-f209d40380fc-serving-cert\") pod \"console-operator-58897d9998-mjd6w\" (UID: \"d799ee04-948d-4c0b-8e84-f209d40380fc\") " pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.226994 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.227030 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0925731-afa5-4a9c-b72d-9806c16dab59-config\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.227068 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.227099 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-audit\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.227134 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjx88\" (UniqueName: \"kubernetes.io/projected/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-kube-api-access-cjx88\") pod \"route-controller-manager-6576b87f9c-cl7lz\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.227171 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-audit-dir\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.227195 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw64v\" (UniqueName: \"kubernetes.io/projected/9feafd03-8082-4d57-9611-602776ad0db6-kube-api-access-gw64v\") pod \"cluster-samples-operator-665b6dd947-vb2s9\" (UID: \"9feafd03-8082-4d57-9611-602776ad0db6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.227216 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nmhtt\" 
(UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.227236 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962ff4d2-ff0d-4e75-b04d-0d318c1980de-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.227254 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-etcd-serving-ca\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.227276 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnc7k\" (UniqueName: \"kubernetes.io/projected/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-kube-api-access-gnc7k\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.227296 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-node-pullsecrets\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.227318 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-serving-cert\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.227401 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.229676 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.236282 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.237055 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.237907 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-mfl9q"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.240340 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.241870 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.242146 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.242541 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.242656 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.242761 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.242980 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.243392 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.243423 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.243575 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.243590 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.243856 4756 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-console"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.243922 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.243851 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.244364 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.244472 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.244586 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.244666 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.243644 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.252279 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.253055 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.253268 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.253563 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.259398 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.260544 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.262775 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.264741 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.267745 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.269398 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.270009 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.270220 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.272545 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.273914 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ndpch"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.274583 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ndpch" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.275493 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.276204 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.276544 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xbhls"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.277006 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.279324 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.279995 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.280872 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.284962 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jjsts"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.285478 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.285631 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jjsts" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.286024 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xz297"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.286207 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.285633 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.286648 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.286747 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.287106 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.287243 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.288204 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.288703 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.288990 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.289509 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.290416 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.293657 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-h8l7m"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.294597 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-h8l7m" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.300752 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.302984 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.303628 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mrnbw"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.304787 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.305834 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ssq2z"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.306031 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.308726 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-mcf75"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.308816 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ssq2z" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.309609 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mcf75" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.310080 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-srchr"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.318319 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dlzvw"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.319458 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mjd6w"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.320253 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p5f6n"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.320876 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.321145 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xlwmt"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.322381 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tsnkp"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.323357 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.324326 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.325306 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq"] Nov 24 12:29:17 crc 
kubenswrapper[4756]: I1124 12:29:17.326378 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.329844 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/962ff4d2-ff0d-4e75-b04d-0d318c1980de-etcd-client\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.329889 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vpfj\" (UniqueName: \"kubernetes.io/projected/d799ee04-948d-4c0b-8e84-f209d40380fc-kube-api-access-4vpfj\") pod \"console-operator-58897d9998-mjd6w\" (UID: \"d799ee04-948d-4c0b-8e84-f209d40380fc\") " pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.329906 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/86948975-86fc-40c1-970b-1fa5d7860497-machine-approver-tls\") pod \"machine-approver-56656f9798-ppzl5\" (UID: \"86948975-86fc-40c1-970b-1fa5d7860497\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.329927 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.329944 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-etcd-client\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.329961 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c4e2bff-ebd3-4ba3-aa22-605ad8193978-serving-cert\") pod \"openshift-config-operator-7777fb866f-c698z\" (UID: \"2c4e2bff-ebd3-4ba3-aa22-605ad8193978\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.329976 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.329992 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-config\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330019 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0925731-afa5-4a9c-b72d-9806c16dab59-service-ca-bundle\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 
crc kubenswrapper[4756]: I1124 12:29:17.330034 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-audit-dir\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330048 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/962ff4d2-ff0d-4e75-b04d-0d318c1980de-audit-dir\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330065 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9feafd03-8082-4d57-9611-602776ad0db6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vb2s9\" (UID: \"9feafd03-8082-4d57-9611-602776ad0db6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330081 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz7s8\" (UniqueName: \"kubernetes.io/projected/d587d404-97ce-49d5-92f9-360d94d6d061-kube-api-access-vz7s8\") pod \"downloads-7954f5f757-hz6cv\" (UID: \"d587d404-97ce-49d5-92f9-360d94d6d061\") " pod="openshift-console/downloads-7954f5f757-hz6cv" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330100 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330227 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d799ee04-948d-4c0b-8e84-f209d40380fc-trusted-ca\") pod \"console-operator-58897d9998-mjd6w\" (UID: \"d799ee04-948d-4c0b-8e84-f209d40380fc\") " pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330312 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330346 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d799ee04-948d-4c0b-8e84-f209d40380fc-config\") pod \"console-operator-58897d9998-mjd6w\" (UID: \"d799ee04-948d-4c0b-8e84-f209d40380fc\") " pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330387 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2939e314-f46f-468e-8890-4ac369fc7482-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6f6fd\" (UID: \"2939e314-f46f-468e-8890-4ac369fc7482\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330415 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-config\") pod \"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330442 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2c4e2bff-ebd3-4ba3-aa22-605ad8193978-available-featuregates\") pod \"openshift-config-operator-7777fb866f-c698z\" (UID: \"2c4e2bff-ebd3-4ba3-aa22-605ad8193978\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330462 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/962ff4d2-ff0d-4e75-b04d-0d318c1980de-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330482 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt99b\" (UniqueName: \"kubernetes.io/projected/2c4e2bff-ebd3-4ba3-aa22-605ad8193978-kube-api-access-gt99b\") pod \"openshift-config-operator-7777fb866f-c698z\" (UID: \"2c4e2bff-ebd3-4ba3-aa22-605ad8193978\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330500 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz7dt\" (UniqueName: \"kubernetes.io/projected/7a28b73f-bc1c-41f3-a275-37dd4e44b507-kube-api-access-nz7dt\") pod \"openshift-apiserver-operator-796bbdcf4f-6mk7j\" (UID: \"7a28b73f-bc1c-41f3-a275-37dd4e44b507\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" Nov 24 
12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330517 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g8nq\" (UniqueName: \"kubernetes.io/projected/2939e314-f46f-468e-8890-4ac369fc7482-kube-api-access-5g8nq\") pod \"cluster-image-registry-operator-dc59b4c8b-6f6fd\" (UID: \"2939e314-f46f-468e-8890-4ac369fc7482\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330531 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-audit-policies\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330558 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-client-ca\") pod \"route-controller-manager-6576b87f9c-cl7lz\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330574 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ac3240-bc3d-4688-9aa1-1976279a656d-serving-cert\") pod \"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330591 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdrrp\" (UniqueName: \"kubernetes.io/projected/76ac3240-bc3d-4688-9aa1-1976279a656d-kube-api-access-vdrrp\") pod 
\"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330612 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-service-ca\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330634 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0925731-afa5-4a9c-b72d-9806c16dab59-serving-cert\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330648 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86948975-86fc-40c1-970b-1fa5d7860497-config\") pod \"machine-approver-56656f9798-ppzl5\" (UID: \"86948975-86fc-40c1-970b-1fa5d7860497\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330663 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330681 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2939e314-f46f-468e-8890-4ac369fc7482-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6f6fd\" (UID: \"2939e314-f46f-468e-8890-4ac369fc7482\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330697 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-oauth-serving-cert\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330720 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a28b73f-bc1c-41f3-a275-37dd4e44b507-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6mk7j\" (UID: \"7a28b73f-bc1c-41f3-a275-37dd4e44b507\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330739 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2939e314-f46f-468e-8890-4ac369fc7482-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6f6fd\" (UID: \"2939e314-f46f-468e-8890-4ac369fc7482\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330755 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nlzl\" (UniqueName: \"kubernetes.io/projected/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-kube-api-access-9nlzl\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330771 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-serving-cert\") pod \"route-controller-manager-6576b87f9c-cl7lz\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330789 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/534241bd-3ed6-4365-b787-5c9c50967479-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zd7wz\" (UID: \"534241bd-3ed6-4365-b787-5c9c50967479\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330805 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6vtw\" (UniqueName: \"kubernetes.io/projected/534241bd-3ed6-4365-b787-5c9c50967479-kube-api-access-m6vtw\") pod \"machine-config-controller-84d6567774-zd7wz\" (UID: \"534241bd-3ed6-4365-b787-5c9c50967479\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330821 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-oauth-config\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330840 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330856 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/962ff4d2-ff0d-4e75-b04d-0d318c1980de-serving-cert\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330873 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-client-ca\") pod \"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330890 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/962ff4d2-ff0d-4e75-b04d-0d318c1980de-audit-policies\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330909 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a28b73f-bc1c-41f3-a275-37dd4e44b507-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6mk7j\" (UID: \"7a28b73f-bc1c-41f3-a275-37dd4e44b507\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" Nov 24 12:29:17 crc 
kubenswrapper[4756]: I1124 12:29:17.330925 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cttxw\" (UniqueName: \"kubernetes.io/projected/f0925731-afa5-4a9c-b72d-9806c16dab59-kube-api-access-cttxw\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330940 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthsd\" (UniqueName: \"kubernetes.io/projected/86948975-86fc-40c1-970b-1fa5d7860497-kube-api-access-tthsd\") pod \"machine-approver-56656f9798-ppzl5\" (UID: \"86948975-86fc-40c1-970b-1fa5d7860497\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330956 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-serving-cert\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.330974 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsxgp\" (UniqueName: \"kubernetes.io/projected/874bfcf4-b717-4ee9-932f-8b28a2b68eac-kube-api-access-nsxgp\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331004 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dppbv\" (UniqueName: \"kubernetes.io/projected/962ff4d2-ff0d-4e75-b04d-0d318c1980de-kube-api-access-dppbv\") pod 
\"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331019 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-config\") pod \"route-controller-manager-6576b87f9c-cl7lz\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331034 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-image-import-ca\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331053 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-encryption-config\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331068 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331085 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331100 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d799ee04-948d-4c0b-8e84-f209d40380fc-serving-cert\") pod \"console-operator-58897d9998-mjd6w\" (UID: \"d799ee04-948d-4c0b-8e84-f209d40380fc\") " pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331116 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331143 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0925731-afa5-4a9c-b72d-9806c16dab59-config\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331175 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjx88\" (UniqueName: \"kubernetes.io/projected/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-kube-api-access-cjx88\") pod \"route-controller-manager-6576b87f9c-cl7lz\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:17 crc 
kubenswrapper[4756]: I1124 12:29:17.331191 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331208 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-trusted-ca-bundle\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331226 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-audit\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331245 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw64v\" (UniqueName: \"kubernetes.io/projected/9feafd03-8082-4d57-9611-602776ad0db6-kube-api-access-gw64v\") pod \"cluster-samples-operator-665b6dd947-vb2s9\" (UID: \"9feafd03-8082-4d57-9611-602776ad0db6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331267 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-audit-dir\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " 
pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331290 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnc7k\" (UniqueName: \"kubernetes.io/projected/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-kube-api-access-gnc7k\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331311 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331332 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-config\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331357 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962ff4d2-ff0d-4e75-b04d-0d318c1980de-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331373 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331390 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-node-pullsecrets\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331405 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-serving-cert\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331422 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331441 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/534241bd-3ed6-4365-b787-5c9c50967479-proxy-tls\") pod \"machine-config-controller-84d6567774-zd7wz\" (UID: \"534241bd-3ed6-4365-b787-5c9c50967479\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331461 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/86948975-86fc-40c1-970b-1fa5d7860497-auth-proxy-config\") pod \"machine-approver-56656f9798-ppzl5\" (UID: \"86948975-86fc-40c1-970b-1fa5d7860497\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331477 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331493 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/962ff4d2-ff0d-4e75-b04d-0d318c1980de-encryption-config\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331510 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0925731-afa5-4a9c-b72d-9806c16dab59-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331801 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.331858 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 
12:29:17.332680 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0925731-afa5-4a9c-b72d-9806c16dab59-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.334626 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2c4e2bff-ebd3-4ba3-aa22-605ad8193978-available-featuregates\") pod \"openshift-config-operator-7777fb866f-c698z\" (UID: \"2c4e2bff-ebd3-4ba3-aa22-605ad8193978\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.338922 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0925731-afa5-4a9c-b72d-9806c16dab59-config\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.341047 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/962ff4d2-ff0d-4e75-b04d-0d318c1980de-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.341813 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-audit-dir\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " 
pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.342652 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-client-ca\") pod \"route-controller-manager-6576b87f9c-cl7lz\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.343115 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.343306 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-audit-policies\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.343570 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.343617 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jjsts"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.344371 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.344851 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-audit\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.345628 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/962ff4d2-ff0d-4e75-b04d-0d318c1980de-etcd-client\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.346530 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/962ff4d2-ff0d-4e75-b04d-0d318c1980de-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.347120 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/962ff4d2-ff0d-4e75-b04d-0d318c1980de-audit-policies\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.347631 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.347693 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-config\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.348266 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2939e314-f46f-468e-8890-4ac369fc7482-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6f6fd\" (UID: \"2939e314-f46f-468e-8890-4ac369fc7482\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.348595 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.348650 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-serving-cert\") pod \"route-controller-manager-6576b87f9c-cl7lz\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.348783 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-image-import-ca\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 
12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.349185 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.349376 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-etcd-serving-ca\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.350083 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.350343 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/962ff4d2-ff0d-4e75-b04d-0d318c1980de-serving-cert\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.350448 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/962ff4d2-ff0d-4e75-b04d-0d318c1980de-audit-dir\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.350501 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-node-pullsecrets\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.350542 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ac3240-bc3d-4688-9aa1-1976279a656d-serving-cert\") pod \"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.350650 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a28b73f-bc1c-41f3-a275-37dd4e44b507-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6mk7j\" (UID: \"7a28b73f-bc1c-41f3-a275-37dd4e44b507\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.350709 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-audit-dir\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.351295 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d799ee04-948d-4c0b-8e84-f209d40380fc-config\") pod \"console-operator-58897d9998-mjd6w\" (UID: \"d799ee04-948d-4c0b-8e84-f209d40380fc\") " 
pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.351343 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0925731-afa5-4a9c-b72d-9806c16dab59-service-ca-bundle\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.351421 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86948975-86fc-40c1-970b-1fa5d7860497-auth-proxy-config\") pod \"machine-approver-56656f9798-ppzl5\" (UID: \"86948975-86fc-40c1-970b-1fa5d7860497\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.351937 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-config\") pod \"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.352342 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.352667 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.353044 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/962ff4d2-ff0d-4e75-b04d-0d318c1980de-encryption-config\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.353288 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.353339 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nmhtt"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.353353 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-config\") pod \"route-controller-manager-6576b87f9c-cl7lz\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.353562 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-encryption-config\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") 
" pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.353776 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-c698z"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.354700 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d799ee04-948d-4c0b-8e84-f209d40380fc-trusted-ca\") pod \"console-operator-58897d9998-mjd6w\" (UID: \"d799ee04-948d-4c0b-8e84-f209d40380fc\") " pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.355563 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-bmgrw"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.356392 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c4e2bff-ebd3-4ba3-aa22-605ad8193978-serving-cert\") pod \"openshift-config-operator-7777fb866f-c698z\" (UID: \"2c4e2bff-ebd3-4ba3-aa22-605ad8193978\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.356620 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a28b73f-bc1c-41f3-a275-37dd4e44b507-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6mk7j\" (UID: \"7a28b73f-bc1c-41f3-a275-37dd4e44b507\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.356678 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: 
\"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.356651 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-etcd-client\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.356885 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86948975-86fc-40c1-970b-1fa5d7860497-config\") pod \"machine-approver-56656f9798-ppzl5\" (UID: \"86948975-86fc-40c1-970b-1fa5d7860497\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.357074 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2939e314-f46f-468e-8890-4ac369fc7482-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6f6fd\" (UID: \"2939e314-f46f-468e-8890-4ac369fc7482\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.357265 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-client-ca\") pod \"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.357738 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-serving-cert\") pod \"apiserver-76f77b778f-tsnkp\" (UID: 
\"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.357793 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bmgrw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.358930 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.360291 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d799ee04-948d-4c0b-8e84-f209d40380fc-serving-cert\") pod \"console-operator-58897d9998-mjd6w\" (UID: \"d799ee04-948d-4c0b-8e84-f209d40380fc\") " pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.361276 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.362480 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/86948975-86fc-40c1-970b-1fa5d7860497-machine-approver-tls\") pod \"machine-approver-56656f9798-ppzl5\" (UID: \"86948975-86fc-40c1-970b-1fa5d7860497\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 
12:29:17.362772 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.363520 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9feafd03-8082-4d57-9611-602776ad0db6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vb2s9\" (UID: \"9feafd03-8082-4d57-9611-602776ad0db6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.363573 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.365023 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xbhls"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.366175 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.367353 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xz297"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.369625 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hz6cv"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.369637 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" 
Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.370190 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.370481 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bmgrw"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.373726 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.373786 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.373796 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.379459 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.380362 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ndpch"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.381424 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xhcw8"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.382466 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.383658 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-h8l7m"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.385663 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.385817 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.386821 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ssq2z"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.388359 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.389571 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8xkjk"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.391826 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8xkjk"] Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.391932 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.401358 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.420527 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.432405 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-trusted-ca-bundle\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.432479 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-config\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.432525 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/534241bd-3ed6-4365-b787-5c9c50967479-proxy-tls\") pod \"machine-config-controller-84d6567774-zd7wz\" (UID: \"534241bd-3ed6-4365-b787-5c9c50967479\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.432639 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-service-ca\") pod \"console-f9d7485db-srchr\" (UID: 
\"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.432686 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-oauth-serving-cert\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.432711 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-oauth-config\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.432731 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/534241bd-3ed6-4365-b787-5c9c50967479-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zd7wz\" (UID: \"534241bd-3ed6-4365-b787-5c9c50967479\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.432750 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6vtw\" (UniqueName: \"kubernetes.io/projected/534241bd-3ed6-4365-b787-5c9c50967479-kube-api-access-m6vtw\") pod \"machine-config-controller-84d6567774-zd7wz\" (UID: \"534241bd-3ed6-4365-b787-5c9c50967479\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.432774 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsxgp\" (UniqueName: 
\"kubernetes.io/projected/874bfcf4-b717-4ee9-932f-8b28a2b68eac-kube-api-access-nsxgp\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.432812 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-serving-cert\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.434211 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-oauth-serving-cert\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.434953 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/534241bd-3ed6-4365-b787-5c9c50967479-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zd7wz\" (UID: \"534241bd-3ed6-4365-b787-5c9c50967479\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.435099 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-trusted-ca-bundle\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.435567 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-service-ca\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.435866 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-config\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.438874 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-oauth-config\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.440008 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-serving-cert\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.441080 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.461126 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.481399 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 24 12:29:17 crc 
kubenswrapper[4756]: I1124 12:29:17.501595 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.521810 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.541579 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.561699 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.590302 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.597603 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/534241bd-3ed6-4365-b787-5c9c50967479-proxy-tls\") pod \"machine-config-controller-84d6567774-zd7wz\" (UID: \"534241bd-3ed6-4365-b787-5c9c50967479\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.601427 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.621748 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.641869 4756 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.661900 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.681490 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.702493 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.720452 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.740931 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.760299 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.782004 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.801403 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.822094 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.844308 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 24 12:29:17 
crc kubenswrapper[4756]: I1124 12:29:17.861099 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.881104 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.902482 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.921730 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.941151 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 24 12:29:17 crc kubenswrapper[4756]: I1124 12:29:17.961484 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.001907 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.023104 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.041616 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.062111 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.081929 4756 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.101870 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.121089 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.142805 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.162417 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.181487 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.201538 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.221444 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.261282 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.280379 4756 request.go:700] Waited for 1.001837957s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&limit=500&resourceVersion=0 Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.290493 4756 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.302551 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.322771 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: E1124 12:29:18.344261 4756 secret.go:188] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 24 12:29:18 crc kubenswrapper[4756]: E1124 12:29:18.344973 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0925731-afa5-4a9c-b72d-9806c16dab59-serving-cert podName:f0925731-afa5-4a9c-b72d-9806c16dab59 nodeName:}" failed. No retries permitted until 2025-11-24 12:29:18.844914784 +0000 UTC m=+91.202428946 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f0925731-afa5-4a9c-b72d-9806c16dab59-serving-cert") pod "authentication-operator-69f744f599-vh4jr" (UID: "f0925731-afa5-4a9c-b72d-9806c16dab59") : failed to sync secret cache: timed out waiting for the condition Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.349419 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.361063 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.381931 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.402563 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.421322 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.441844 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.462250 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.475259 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.475323 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.475290 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.475259 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.482070 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.501970 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.520788 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.541542 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.561912 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.581834 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.602605 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.631600 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 24 
12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.641814 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.662387 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.681647 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.702443 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.722054 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.740775 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.761422 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.780734 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.801536 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.822054 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.841308 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.858419 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0925731-afa5-4a9c-b72d-9806c16dab59-serving-cert\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.861971 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.881352 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.901147 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.920526 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.940317 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.961047 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 24 12:29:18 crc kubenswrapper[4756]: I1124 12:29:18.982943 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.002259 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.021318 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.075022 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vpfj\" (UniqueName: \"kubernetes.io/projected/d799ee04-948d-4c0b-8e84-f209d40380fc-kube-api-access-4vpfj\") pod \"console-operator-58897d9998-mjd6w\" (UID: \"d799ee04-948d-4c0b-8e84-f209d40380fc\") " pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.084389 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt99b\" (UniqueName: \"kubernetes.io/projected/2c4e2bff-ebd3-4ba3-aa22-605ad8193978-kube-api-access-gt99b\") pod \"openshift-config-operator-7777fb866f-c698z\" (UID: \"2c4e2bff-ebd3-4ba3-aa22-605ad8193978\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.097733 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz7dt\" (UniqueName: \"kubernetes.io/projected/7a28b73f-bc1c-41f3-a275-37dd4e44b507-kube-api-access-nz7dt\") pod \"openshift-apiserver-operator-796bbdcf4f-6mk7j\" (UID: \"7a28b73f-bc1c-41f3-a275-37dd4e44b507\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.117785 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g8nq\" (UniqueName: \"kubernetes.io/projected/2939e314-f46f-468e-8890-4ac369fc7482-kube-api-access-5g8nq\") pod \"cluster-image-registry-operator-dc59b4c8b-6f6fd\" (UID: \"2939e314-f46f-468e-8890-4ac369fc7482\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.122909 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.155926 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjx88\" (UniqueName: \"kubernetes.io/projected/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-kube-api-access-cjx88\") pod \"route-controller-manager-6576b87f9c-cl7lz\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.156324 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2939e314-f46f-468e-8890-4ac369fc7482-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6f6fd\" (UID: \"2939e314-f46f-468e-8890-4ac369fc7482\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.180486 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdrrp\" (UniqueName: \"kubernetes.io/projected/76ac3240-bc3d-4688-9aa1-1976279a656d-kube-api-access-vdrrp\") pod \"controller-manager-879f6c89f-mrnbw\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.199144 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.200917 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nlzl\" (UniqueName: \"kubernetes.io/projected/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-kube-api-access-9nlzl\") pod \"oauth-openshift-558db77b4-nmhtt\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.232764 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnc7k\" (UniqueName: \"kubernetes.io/projected/8c6400f9-d8a4-48da-986a-b9dd8bc96a82-kube-api-access-gnc7k\") pod \"apiserver-76f77b778f-tsnkp\" (UID: \"8c6400f9-d8a4-48da-986a-b9dd8bc96a82\") " pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.244951 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dppbv\" (UniqueName: \"kubernetes.io/projected/962ff4d2-ff0d-4e75-b04d-0d318c1980de-kube-api-access-dppbv\") pod \"apiserver-7bbb656c7d-9kgtg\" (UID: \"962ff4d2-ff0d-4e75-b04d-0d318c1980de\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.257462 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz7s8\" (UniqueName: \"kubernetes.io/projected/d587d404-97ce-49d5-92f9-360d94d6d061-kube-api-access-vz7s8\") pod \"downloads-7954f5f757-hz6cv\" (UID: \"d587d404-97ce-49d5-92f9-360d94d6d061\") " pod="openshift-console/downloads-7954f5f757-hz6cv" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.280646 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.285399 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthsd\" (UniqueName: \"kubernetes.io/projected/86948975-86fc-40c1-970b-1fa5d7860497-kube-api-access-tthsd\") pod \"machine-approver-56656f9798-ppzl5\" (UID: \"86948975-86fc-40c1-970b-1fa5d7860497\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.296836 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cttxw\" (UniqueName: \"kubernetes.io/projected/f0925731-afa5-4a9c-b72d-9806c16dab59-kube-api-access-cttxw\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.299177 4756 request.go:700] Waited for 1.950910963s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.322099 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.324223 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw64v\" (UniqueName: \"kubernetes.io/projected/9feafd03-8082-4d57-9611-602776ad0db6-kube-api-access-gw64v\") pod \"cluster-samples-operator-665b6dd947-vb2s9\" (UID: \"9feafd03-8082-4d57-9611-602776ad0db6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.345447 4756 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.345639 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.360830 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.361357 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.381503 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.382507 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.384308 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.394495 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.401958 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.405533 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.412098 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:19 crc kubenswrapper[4756]: W1124 12:29:19.417746 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86948975_86fc_40c1_970b_1fa5d7860497.slice/crio-f492bad819cc1725ce48c805ca5e6779a62caca55a61c73fa6bf058b9738b8d4 WatchSource:0}: Error finding container f492bad819cc1725ce48c805ca5e6779a62caca55a61c73fa6bf058b9738b8d4: Status 404 returned error can't find the container with id f492bad819cc1725ce48c805ca5e6779a62caca55a61c73fa6bf058b9738b8d4 Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.421574 4756 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.435700 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mjd6w"] Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.441307 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.458260 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-hz6cv" Nov 24 12:29:19 crc kubenswrapper[4756]: W1124 12:29:19.472656 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd799ee04_948d_4c0b_8e84_f209d40380fc.slice/crio-f4a1e4c7d105956c967ae3a1f1eba8dcf3f336719778d18c028a281454fdd836 WatchSource:0}: Error finding container f4a1e4c7d105956c967ae3a1f1eba8dcf3f336719778d18c028a281454fdd836: Status 404 returned error can't find the container with id f4a1e4c7d105956c967ae3a1f1eba8dcf3f336719778d18c028a281454fdd836 Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.483837 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6vtw\" (UniqueName: \"kubernetes.io/projected/534241bd-3ed6-4365-b787-5c9c50967479-kube-api-access-m6vtw\") pod \"machine-config-controller-84d6567774-zd7wz\" (UID: \"534241bd-3ed6-4365-b787-5c9c50967479\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.500359 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-c698z"] Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.508681 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsxgp\" (UniqueName: \"kubernetes.io/projected/874bfcf4-b717-4ee9-932f-8b28a2b68eac-kube-api-access-nsxgp\") pod \"console-f9d7485db-srchr\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.522350 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.534288 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.535545 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0925731-afa5-4a9c-b72d-9806c16dab59-serving-cert\") pod \"authentication-operator-69f744f599-vh4jr\" (UID: \"f0925731-afa5-4a9c-b72d-9806c16dab59\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.545731 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.554759 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg"] Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.562303 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.569717 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9566baf3-f84d-4ef8-9d59-99a2eaefd041-signing-cabundle\") pod \"service-ca-9c57cc56f-ndpch\" (UID: \"9566baf3-f84d-4ef8-9d59-99a2eaefd041\") " pod="openshift-service-ca/service-ca-9c57cc56f-ndpch" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.569778 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcdeb21e-79e4-4fbb-bd32-76678f306353-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kb79v\" (UID: \"dcdeb21e-79e4-4fbb-bd32-76678f306353\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.569795 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79kx7\" (UniqueName: \"kubernetes.io/projected/3a420a5d-a184-43dd-a25c-80c97502ce62-kube-api-access-79kx7\") pod \"router-default-5444994796-mfl9q\" (UID: \"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.569852 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnk6w\" (UniqueName: \"kubernetes.io/projected/4c411403-4c19-4c73-9d30-d818464db788-kube-api-access-bnk6w\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.569885 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-registry-certificates\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.569953 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7fe174d-06d6-4000-9f8d-3142f627744e-apiservice-cert\") pod \"packageserver-d55dfcdfc-mlfk9\" (UID: \"d7fe174d-06d6-4000-9f8d-3142f627744e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.571527 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sslw\" (UniqueName: \"kubernetes.io/projected/8dbcb8da-17af-41eb-bfc0-c26853a43c34-kube-api-access-9sslw\") pod \"migrator-59844c95c7-dlzvw\" (UID: \"8dbcb8da-17af-41eb-bfc0-c26853a43c34\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlzvw" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.571674 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00155282-9211-4df2-b258-105f5a0c8236-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ch9ff\" (UID: \"00155282-9211-4df2-b258-105f5a0c8236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.571778 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c411403-4c19-4c73-9d30-d818464db788-serving-cert\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.571910 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c411403-4c19-4c73-9d30-d818464db788-etcd-client\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.571941 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a420a5d-a184-43dd-a25c-80c97502ce62-service-ca-bundle\") pod \"router-default-5444994796-mfl9q\" (UID: 
\"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.571990 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttnqj\" (UniqueName: \"kubernetes.io/projected/211bc6d3-e827-4440-ae4a-d418b9bfd3f6-kube-api-access-ttnqj\") pod \"machine-config-operator-74547568cd-bdnrj\" (UID: \"211bc6d3-e827-4440-ae4a-d418b9bfd3f6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.572017 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0672b72-0b66-434e-8930-4297ea0f3f98-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xhcw8\" (UID: \"a0672b72-0b66-434e-8930-4297ea0f3f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.572054 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-trusted-ca\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.572405 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c45e1418-cf4c-43e5-a9b6-8ff35e5305b8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-p6frm\" (UID: \"c45e1418-cf4c-43e5-a9b6-8ff35e5305b8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.583326 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4c411403-4c19-4c73-9d30-d818464db788-etcd-ca\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.583429 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pxt4\" (UniqueName: \"kubernetes.io/projected/a5112751-74d4-43c5-aa07-d8b794af0ae3-kube-api-access-5pxt4\") pod \"openshift-controller-manager-operator-756b6f6bc6-dk9n6\" (UID: \"a5112751-74d4-43c5-aa07-d8b794af0ae3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.583518 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3a420a5d-a184-43dd-a25c-80c97502ce62-default-certificate\") pod \"router-default-5444994796-mfl9q\" (UID: \"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.583545 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c411403-4c19-4c73-9d30-d818464db788-config\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.583613 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5112751-74d4-43c5-aa07-d8b794af0ae3-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-dk9n6\" (UID: \"a5112751-74d4-43c5-aa07-d8b794af0ae3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.583700 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-registry-tls\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.585750 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt4d6\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-kube-api-access-dt4d6\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.585792 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9566baf3-f84d-4ef8-9d59-99a2eaefd041-signing-key\") pod \"service-ca-9c57cc56f-ndpch\" (UID: \"9566baf3-f84d-4ef8-9d59-99a2eaefd041\") " pod="openshift-service-ca/service-ca-9c57cc56f-ndpch" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.585854 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcdeb21e-79e4-4fbb-bd32-76678f306353-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kb79v\" (UID: \"dcdeb21e-79e4-4fbb-bd32-76678f306353\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v" Nov 24 12:29:19 crc 
kubenswrapper[4756]: I1124 12:29:19.585898 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00155282-9211-4df2-b258-105f5a0c8236-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ch9ff\" (UID: \"00155282-9211-4df2-b258-105f5a0c8236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.585921 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9grxn\" (UniqueName: \"kubernetes.io/projected/a0672b72-0b66-434e-8930-4297ea0f3f98-kube-api-access-9grxn\") pod \"machine-api-operator-5694c8668f-xhcw8\" (UID: \"a0672b72-0b66-434e-8930-4297ea0f3f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.585953 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvwp9\" (UniqueName: \"kubernetes.io/projected/d7fe174d-06d6-4000-9f8d-3142f627744e-kube-api-access-jvwp9\") pod \"packageserver-d55dfcdfc-mlfk9\" (UID: \"d7fe174d-06d6-4000-9f8d-3142f627744e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.585988 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-bound-sa-token\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.586016 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/00155282-9211-4df2-b258-105f5a0c8236-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ch9ff\" (UID: \"00155282-9211-4df2-b258-105f5a0c8236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.586107 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c411403-4c19-4c73-9d30-d818464db788-etcd-service-ca\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.586143 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a420a5d-a184-43dd-a25c-80c97502ce62-metrics-certs\") pod \"router-default-5444994796-mfl9q\" (UID: \"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.586358 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/211bc6d3-e827-4440-ae4a-d418b9bfd3f6-proxy-tls\") pod \"machine-config-operator-74547568cd-bdnrj\" (UID: \"211bc6d3-e827-4440-ae4a-d418b9bfd3f6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.586947 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85bdd\" (UniqueName: \"kubernetes.io/projected/9566baf3-f84d-4ef8-9d59-99a2eaefd041-kube-api-access-85bdd\") pod \"service-ca-9c57cc56f-ndpch\" (UID: \"9566baf3-f84d-4ef8-9d59-99a2eaefd041\") " pod="openshift-service-ca/service-ca-9c57cc56f-ndpch" Nov 24 
12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.586997 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztsk5\" (UniqueName: \"kubernetes.io/projected/dcdeb21e-79e4-4fbb-bd32-76678f306353-kube-api-access-ztsk5\") pod \"kube-storage-version-migrator-operator-b67b599dd-kb79v\" (UID: \"dcdeb21e-79e4-4fbb-bd32-76678f306353\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.587021 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7fe174d-06d6-4000-9f8d-3142f627744e-tmpfs\") pod \"packageserver-d55dfcdfc-mlfk9\" (UID: \"d7fe174d-06d6-4000-9f8d-3142f627744e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.587046 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c45e1418-cf4c-43e5-a9b6-8ff35e5305b8-config\") pod \"kube-controller-manager-operator-78b949d7b-p6frm\" (UID: \"c45e1418-cf4c-43e5-a9b6-8ff35e5305b8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.587091 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0672b72-0b66-434e-8930-4297ea0f3f98-config\") pod \"machine-api-operator-5694c8668f-xhcw8\" (UID: \"a0672b72-0b66-434e-8930-4297ea0f3f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.587112 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/211bc6d3-e827-4440-ae4a-d418b9bfd3f6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bdnrj\" (UID: \"211bc6d3-e827-4440-ae4a-d418b9bfd3f6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.587153 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.593312 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/211bc6d3-e827-4440-ae4a-d418b9bfd3f6-images\") pod \"machine-config-operator-74547568cd-bdnrj\" (UID: \"211bc6d3-e827-4440-ae4a-d418b9bfd3f6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.593368 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c45e1418-cf4c-43e5-a9b6-8ff35e5305b8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-p6frm\" (UID: \"c45e1418-cf4c-43e5-a9b6-8ff35e5305b8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.594808 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: 
\"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.594891 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.594937 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5112751-74d4-43c5-aa07-d8b794af0ae3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dk9n6\" (UID: \"a5112751-74d4-43c5-aa07-d8b794af0ae3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.594962 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a0672b72-0b66-434e-8930-4297ea0f3f98-images\") pod \"machine-api-operator-5694c8668f-xhcw8\" (UID: \"a0672b72-0b66-434e-8930-4297ea0f3f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.594988 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7fe174d-06d6-4000-9f8d-3142f627744e-webhook-cert\") pod \"packageserver-d55dfcdfc-mlfk9\" (UID: \"d7fe174d-06d6-4000-9f8d-3142f627744e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.595030 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3a420a5d-a184-43dd-a25c-80c97502ce62-stats-auth\") pod \"router-default-5444994796-mfl9q\" (UID: \"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.600259 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 12:29:19 crc kubenswrapper[4756]: E1124 12:29:19.601067 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:20.101036658 +0000 UTC m=+92.458550800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.601802 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.603790 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.605595 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mrnbw"] Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.626457 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.653212 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.661362 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.680744 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.695934 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696244 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pxt4\" (UniqueName: \"kubernetes.io/projected/a5112751-74d4-43c5-aa07-d8b794af0ae3-kube-api-access-5pxt4\") pod \"openshift-controller-manager-operator-756b6f6bc6-dk9n6\" (UID: \"a5112751-74d4-43c5-aa07-d8b794af0ae3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696310 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ee5a29c4-e735-4d02-9263-1789e47f9e75-node-bootstrap-token\") pod \"machine-config-server-mcf75\" (UID: \"ee5a29c4-e735-4d02-9263-1789e47f9e75\") " pod="openshift-machine-config-operator/machine-config-server-mcf75" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696347 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3-trusted-ca\") pod \"ingress-operator-5b745b69d9-xz297\" (UID: \"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696363 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ee5a29c4-e735-4d02-9263-1789e47f9e75-certs\") pod \"machine-config-server-mcf75\" (UID: \"ee5a29c4-e735-4d02-9263-1789e47f9e75\") " pod="openshift-machine-config-operator/machine-config-server-mcf75" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696379 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcnpx\" (UniqueName: \"kubernetes.io/projected/6139f45c-2a2e-4561-af2e-2451b1f7bc15-kube-api-access-zcnpx\") pod \"service-ca-operator-777779d784-r5pv5\" (UID: \"6139f45c-2a2e-4561-af2e-2451b1f7bc15\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696399 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3a420a5d-a184-43dd-a25c-80c97502ce62-default-certificate\") pod \"router-default-5444994796-mfl9q\" (UID: 
\"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696416 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk55w\" (UniqueName: \"kubernetes.io/projected/87c12acc-f25d-4842-a473-832d6b998769-kube-api-access-hk55w\") pod \"dns-operator-744455d44c-ssq2z\" (UID: \"87c12acc-f25d-4842-a473-832d6b998769\") " pod="openshift-dns-operator/dns-operator-744455d44c-ssq2z" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696454 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c411403-4c19-4c73-9d30-d818464db788-config\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696472 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/264f14e7-1512-4403-9dbc-0c6af40ec87b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lnh7z\" (UID: \"264f14e7-1512-4403-9dbc-0c6af40ec87b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696489 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txrxp\" (UniqueName: \"kubernetes.io/projected/c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977-kube-api-access-txrxp\") pod \"package-server-manager-789f6589d5-bnnhq\" (UID: \"c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696508 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5112751-74d4-43c5-aa07-d8b794af0ae3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dk9n6\" (UID: \"a5112751-74d4-43c5-aa07-d8b794af0ae3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696552 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-registry-tls\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696574 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt4d6\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-kube-api-access-dt4d6\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696598 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9566baf3-f84d-4ef8-9d59-99a2eaefd041-signing-key\") pod \"service-ca-9c57cc56f-ndpch\" (UID: \"9566baf3-f84d-4ef8-9d59-99a2eaefd041\") " pod="openshift-service-ca/service-ca-9c57cc56f-ndpch" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696622 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw95z\" (UniqueName: \"kubernetes.io/projected/3ff06ae8-376c-4319-9358-200e6f312237-kube-api-access-xw95z\") pod \"marketplace-operator-79b997595-xbhls\" (UID: \"3ff06ae8-376c-4319-9358-200e6f312237\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" Nov 24 
12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696649 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcdeb21e-79e4-4fbb-bd32-76678f306353-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kb79v\" (UID: \"dcdeb21e-79e4-4fbb-bd32-76678f306353\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696672 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3ff06ae8-376c-4319-9358-200e6f312237-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xbhls\" (UID: \"3ff06ae8-376c-4319-9358-200e6f312237\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696696 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvwp9\" (UniqueName: \"kubernetes.io/projected/d7fe174d-06d6-4000-9f8d-3142f627744e-kube-api-access-jvwp9\") pod \"packageserver-d55dfcdfc-mlfk9\" (UID: \"d7fe174d-06d6-4000-9f8d-3142f627744e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696719 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-socket-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696743 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/00155282-9211-4df2-b258-105f5a0c8236-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ch9ff\" (UID: \"00155282-9211-4df2-b258-105f5a0c8236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696765 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9grxn\" (UniqueName: \"kubernetes.io/projected/a0672b72-0b66-434e-8930-4297ea0f3f98-kube-api-access-9grxn\") pod \"machine-api-operator-5694c8668f-xhcw8\" (UID: \"a0672b72-0b66-434e-8930-4297ea0f3f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696786 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-bound-sa-token\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696809 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00155282-9211-4df2-b258-105f5a0c8236-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ch9ff\" (UID: \"00155282-9211-4df2-b258-105f5a0c8236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696845 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264f14e7-1512-4403-9dbc-0c6af40ec87b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lnh7z\" (UID: \"264f14e7-1512-4403-9dbc-0c6af40ec87b\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696868 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c411403-4c19-4c73-9d30-d818464db788-etcd-service-ca\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696891 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a420a5d-a184-43dd-a25c-80c97502ce62-metrics-certs\") pod \"router-default-5444994796-mfl9q\" (UID: \"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696914 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4cf127cc-52b2-4121-8f1d-b2a628f95ea4-metrics-tls\") pod \"dns-default-h8l7m\" (UID: \"4cf127cc-52b2-4121-8f1d-b2a628f95ea4\") " pod="openshift-dns/dns-default-h8l7m" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696943 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/211bc6d3-e827-4440-ae4a-d418b9bfd3f6-proxy-tls\") pod \"machine-config-operator-74547568cd-bdnrj\" (UID: \"211bc6d3-e827-4440-ae4a-d418b9bfd3f6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696961 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx9js\" (UniqueName: \"kubernetes.io/projected/036136f7-02ff-449a-9367-0cf354821811-kube-api-access-lx9js\") pod 
\"collect-profiles-29399775-r6bnr\" (UID: \"036136f7-02ff-449a-9367-0cf354821811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696978 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85bdd\" (UniqueName: \"kubernetes.io/projected/9566baf3-f84d-4ef8-9d59-99a2eaefd041-kube-api-access-85bdd\") pod \"service-ca-9c57cc56f-ndpch\" (UID: \"9566baf3-f84d-4ef8-9d59-99a2eaefd041\") " pod="openshift-service-ca/service-ca-9c57cc56f-ndpch" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.696994 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmnzj\" (UniqueName: \"kubernetes.io/projected/4cf127cc-52b2-4121-8f1d-b2a628f95ea4-kube-api-access-fmnzj\") pod \"dns-default-h8l7m\" (UID: \"4cf127cc-52b2-4121-8f1d-b2a628f95ea4\") " pod="openshift-dns/dns-default-h8l7m" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697019 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztsk5\" (UniqueName: \"kubernetes.io/projected/dcdeb21e-79e4-4fbb-bd32-76678f306353-kube-api-access-ztsk5\") pod \"kube-storage-version-migrator-operator-b67b599dd-kb79v\" (UID: \"dcdeb21e-79e4-4fbb-bd32-76678f306353\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697039 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7fe174d-06d6-4000-9f8d-3142f627744e-tmpfs\") pod \"packageserver-d55dfcdfc-mlfk9\" (UID: \"d7fe174d-06d6-4000-9f8d-3142f627744e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697057 4756 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c45e1418-cf4c-43e5-a9b6-8ff35e5305b8-config\") pod \"kube-controller-manager-operator-78b949d7b-p6frm\" (UID: \"c45e1418-cf4c-43e5-a9b6-8ff35e5305b8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697089 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0672b72-0b66-434e-8930-4297ea0f3f98-config\") pod \"machine-api-operator-5694c8668f-xhcw8\" (UID: \"a0672b72-0b66-434e-8930-4297ea0f3f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697104 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/211bc6d3-e827-4440-ae4a-d418b9bfd3f6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bdnrj\" (UID: \"211bc6d3-e827-4440-ae4a-d418b9bfd3f6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697120 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697138 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/211bc6d3-e827-4440-ae4a-d418b9bfd3f6-images\") pod \"machine-config-operator-74547568cd-bdnrj\" (UID: \"211bc6d3-e827-4440-ae4a-d418b9bfd3f6\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" Nov 24 12:29:19 crc kubenswrapper[4756]: E1124 12:29:19.697386 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:20.197365248 +0000 UTC m=+92.554879390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697409 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21b90170-abc5-427d-92a4-d7f4240bb961-srv-cert\") pod \"olm-operator-6b444d44fb-6ks7x\" (UID: \"21b90170-abc5-427d-92a4-d7f4240bb961\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697434 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c45e1418-cf4c-43e5-a9b6-8ff35e5305b8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-p6frm\" (UID: \"c45e1418-cf4c-43e5-a9b6-8ff35e5305b8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697452 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/86c61608-af41-4137-9fa6-d07c1e89f13d-srv-cert\") pod \"catalog-operator-68c6474976-6ql8x\" (UID: \"86c61608-af41-4137-9fa6-d07c1e89f13d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697470 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6j5k\" (UniqueName: \"kubernetes.io/projected/ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3-kube-api-access-x6j5k\") pod \"ingress-operator-5b745b69d9-xz297\" (UID: \"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697490 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3-metrics-tls\") pod \"ingress-operator-5b745b69d9-xz297\" (UID: \"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697507 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86c61608-af41-4137-9fa6-d07c1e89f13d-profile-collector-cert\") pod \"catalog-operator-68c6474976-6ql8x\" (UID: \"86c61608-af41-4137-9fa6-d07c1e89f13d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697523 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq8ql\" (UniqueName: \"kubernetes.io/projected/5a60e484-5344-420c-8f60-aea62504ed10-kube-api-access-lq8ql\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" 
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697550 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bcb119b7-a5d6-411a-8b0e-78d09d23c6b0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jjsts\" (UID: \"bcb119b7-a5d6-411a-8b0e-78d09d23c6b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jjsts" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697568 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697599 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697619 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/37bf3224-33a8-45ab-93fc-05a44ed3f535-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-v5ndr\" (UID: \"37bf3224-33a8-45ab-93fc-05a44ed3f535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697635 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/036136f7-02ff-449a-9367-0cf354821811-secret-volume\") pod \"collect-profiles-29399775-r6bnr\" (UID: \"036136f7-02ff-449a-9367-0cf354821811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697650 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5112751-74d4-43c5-aa07-d8b794af0ae3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dk9n6\" (UID: \"a5112751-74d4-43c5-aa07-d8b794af0ae3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697667 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a0672b72-0b66-434e-8930-4297ea0f3f98-images\") pod \"machine-api-operator-5694c8668f-xhcw8\" (UID: \"a0672b72-0b66-434e-8930-4297ea0f3f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697681 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7fe174d-06d6-4000-9f8d-3142f627744e-webhook-cert\") pod \"packageserver-d55dfcdfc-mlfk9\" (UID: \"d7fe174d-06d6-4000-9f8d-3142f627744e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.697698 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gdtj\" (UniqueName: \"kubernetes.io/projected/ee5a29c4-e735-4d02-9263-1789e47f9e75-kube-api-access-5gdtj\") pod \"machine-config-server-mcf75\" (UID: \"ee5a29c4-e735-4d02-9263-1789e47f9e75\") " pod="openshift-machine-config-operator/machine-config-server-mcf75" Nov 24 12:29:19 
crc kubenswrapper[4756]: I1124 12:29:19.697715 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3a420a5d-a184-43dd-a25c-80c97502ce62-stats-auth\") pod \"router-default-5444994796-mfl9q\" (UID: \"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.698287 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/211bc6d3-e827-4440-ae4a-d418b9bfd3f6-images\") pod \"machine-config-operator-74547568cd-bdnrj\" (UID: \"211bc6d3-e827-4440-ae4a-d418b9bfd3f6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699073 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00155282-9211-4df2-b258-105f5a0c8236-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ch9ff\" (UID: \"00155282-9211-4df2-b258-105f5a0c8236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699087 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fcc8\" (UniqueName: \"kubernetes.io/projected/37bf3224-33a8-45ab-93fc-05a44ed3f535-kube-api-access-6fcc8\") pod \"control-plane-machine-set-operator-78cbb6b69f-v5ndr\" (UID: \"37bf3224-33a8-45ab-93fc-05a44ed3f535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699134 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs96n\" (UniqueName: \"kubernetes.io/projected/21b90170-abc5-427d-92a4-d7f4240bb961-kube-api-access-hs96n\") pod 
\"olm-operator-6b444d44fb-6ks7x\" (UID: \"21b90170-abc5-427d-92a4-d7f4240bb961\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699192 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xz297\" (UID: \"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699216 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-registration-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699240 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9566baf3-f84d-4ef8-9d59-99a2eaefd041-signing-cabundle\") pod \"service-ca-9c57cc56f-ndpch\" (UID: \"9566baf3-f84d-4ef8-9d59-99a2eaefd041\") " pod="openshift-service-ca/service-ca-9c57cc56f-ndpch" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699302 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcdeb21e-79e4-4fbb-bd32-76678f306353-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kb79v\" (UID: \"dcdeb21e-79e4-4fbb-bd32-76678f306353\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699330 4756 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-79kx7\" (UniqueName: \"kubernetes.io/projected/3a420a5d-a184-43dd-a25c-80c97502ce62-kube-api-access-79kx7\") pod \"router-default-5444994796-mfl9q\" (UID: \"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699354 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-plugins-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699372 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ff06ae8-376c-4319-9358-200e6f312237-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xbhls\" (UID: \"3ff06ae8-376c-4319-9358-200e6f312237\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699390 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnk6w\" (UniqueName: \"kubernetes.io/projected/4c411403-4c19-4c73-9d30-d818464db788-kube-api-access-bnk6w\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699407 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87c12acc-f25d-4842-a473-832d6b998769-metrics-tls\") pod \"dns-operator-744455d44c-ssq2z\" (UID: \"87c12acc-f25d-4842-a473-832d6b998769\") " pod="openshift-dns-operator/dns-operator-744455d44c-ssq2z" Nov 24 12:29:19 
crc kubenswrapper[4756]: I1124 12:29:19.699422 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/264f14e7-1512-4403-9dbc-0c6af40ec87b-config\") pod \"kube-apiserver-operator-766d6c64bb-lnh7z\" (UID: \"264f14e7-1512-4403-9dbc-0c6af40ec87b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699452 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-registry-certificates\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699477 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7fe174d-06d6-4000-9f8d-3142f627744e-apiservice-cert\") pod \"packageserver-d55dfcdfc-mlfk9\" (UID: \"d7fe174d-06d6-4000-9f8d-3142f627744e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699501 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f52de20-d009-4ede-b66a-ba6f395cd99b-cert\") pod \"ingress-canary-bmgrw\" (UID: \"1f52de20-d009-4ede-b66a-ba6f395cd99b\") " pod="openshift-ingress-canary/ingress-canary-bmgrw" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699525 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l2m4\" (UniqueName: \"kubernetes.io/projected/1f52de20-d009-4ede-b66a-ba6f395cd99b-kube-api-access-2l2m4\") pod \"ingress-canary-bmgrw\" (UID: 
\"1f52de20-d009-4ede-b66a-ba6f395cd99b\") " pod="openshift-ingress-canary/ingress-canary-bmgrw" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699563 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sslw\" (UniqueName: \"kubernetes.io/projected/8dbcb8da-17af-41eb-bfc0-c26853a43c34-kube-api-access-9sslw\") pod \"migrator-59844c95c7-dlzvw\" (UID: \"8dbcb8da-17af-41eb-bfc0-c26853a43c34\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlzvw" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699589 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00155282-9211-4df2-b258-105f5a0c8236-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ch9ff\" (UID: \"00155282-9211-4df2-b258-105f5a0c8236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699614 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bnnhq\" (UID: \"c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699634 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cf127cc-52b2-4121-8f1d-b2a628f95ea4-config-volume\") pod \"dns-default-h8l7m\" (UID: \"4cf127cc-52b2-4121-8f1d-b2a628f95ea4\") " pod="openshift-dns/dns-default-h8l7m" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699651 4756 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c411403-4c19-4c73-9d30-d818464db788-serving-cert\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699668 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s65xf\" (UniqueName: \"kubernetes.io/projected/bcb119b7-a5d6-411a-8b0e-78d09d23c6b0-kube-api-access-s65xf\") pod \"multus-admission-controller-857f4d67dd-jjsts\" (UID: \"bcb119b7-a5d6-411a-8b0e-78d09d23c6b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jjsts" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699686 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-mountpoint-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699708 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c411403-4c19-4c73-9d30-d818464db788-etcd-client\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699725 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a420a5d-a184-43dd-a25c-80c97502ce62-service-ca-bundle\") pod \"router-default-5444994796-mfl9q\" (UID: \"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 
12:29:19.699741 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttnqj\" (UniqueName: \"kubernetes.io/projected/211bc6d3-e827-4440-ae4a-d418b9bfd3f6-kube-api-access-ttnqj\") pod \"machine-config-operator-74547568cd-bdnrj\" (UID: \"211bc6d3-e827-4440-ae4a-d418b9bfd3f6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699756 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-csi-data-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699783 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6139f45c-2a2e-4561-af2e-2451b1f7bc15-serving-cert\") pod \"service-ca-operator-777779d784-r5pv5\" (UID: \"6139f45c-2a2e-4561-af2e-2451b1f7bc15\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699801 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0672b72-0b66-434e-8930-4297ea0f3f98-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xhcw8\" (UID: \"a0672b72-0b66-434e-8930-4297ea0f3f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699817 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/036136f7-02ff-449a-9367-0cf354821811-config-volume\") pod \"collect-profiles-29399775-r6bnr\" 
(UID: \"036136f7-02ff-449a-9367-0cf354821811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699834 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-trusted-ca\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699852 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/21b90170-abc5-427d-92a4-d7f4240bb961-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6ks7x\" (UID: \"21b90170-abc5-427d-92a4-d7f4240bb961\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699868 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6139f45c-2a2e-4561-af2e-2451b1f7bc15-config\") pod \"service-ca-operator-777779d784-r5pv5\" (UID: \"6139f45c-2a2e-4561-af2e-2451b1f7bc15\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699885 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c45e1418-cf4c-43e5-a9b6-8ff35e5305b8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-p6frm\" (UID: \"c45e1418-cf4c-43e5-a9b6-8ff35e5305b8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699903 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8d58\" (UniqueName: \"kubernetes.io/projected/86c61608-af41-4137-9fa6-d07c1e89f13d-kube-api-access-f8d58\") pod \"catalog-operator-68c6474976-6ql8x\" (UID: \"86c61608-af41-4137-9fa6-d07c1e89f13d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.699920 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4c411403-4c19-4c73-9d30-d818464db788-etcd-ca\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.700525 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4c411403-4c19-4c73-9d30-d818464db788-etcd-ca\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.702075 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c411403-4c19-4c73-9d30-d818464db788-etcd-service-ca\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.703420 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3a420a5d-a184-43dd-a25c-80c97502ce62-default-certificate\") pod \"router-default-5444994796-mfl9q\" (UID: \"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 
12:29:19.704718 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7fe174d-06d6-4000-9f8d-3142f627744e-tmpfs\") pod \"packageserver-d55dfcdfc-mlfk9\" (UID: \"d7fe174d-06d6-4000-9f8d-3142f627744e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:19 crc kubenswrapper[4756]: E1124 12:29:19.708547 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:20.208525873 +0000 UTC m=+92.566040015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.709328 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c45e1418-cf4c-43e5-a9b6-8ff35e5305b8-config\") pod \"kube-controller-manager-operator-78b949d7b-p6frm\" (UID: \"c45e1418-cf4c-43e5-a9b6-8ff35e5305b8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.710939 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c411403-4c19-4c73-9d30-d818464db788-config\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc 
kubenswrapper[4756]: I1124 12:29:19.711279 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/211bc6d3-e827-4440-ae4a-d418b9bfd3f6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bdnrj\" (UID: \"211bc6d3-e827-4440-ae4a-d418b9bfd3f6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.711463 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5112751-74d4-43c5-aa07-d8b794af0ae3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dk9n6\" (UID: \"a5112751-74d4-43c5-aa07-d8b794af0ae3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.712875 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5112751-74d4-43c5-aa07-d8b794af0ae3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dk9n6\" (UID: \"a5112751-74d4-43c5-aa07-d8b794af0ae3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.715288 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcdeb21e-79e4-4fbb-bd32-76678f306353-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kb79v\" (UID: \"dcdeb21e-79e4-4fbb-bd32-76678f306353\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.715890 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0672b72-0b66-434e-8930-4297ea0f3f98-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xhcw8\" (UID: \"a0672b72-0b66-434e-8930-4297ea0f3f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.716224 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00155282-9211-4df2-b258-105f5a0c8236-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ch9ff\" (UID: \"00155282-9211-4df2-b258-105f5a0c8236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.717414 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0672b72-0b66-434e-8930-4297ea0f3f98-config\") pod \"machine-api-operator-5694c8668f-xhcw8\" (UID: \"a0672b72-0b66-434e-8930-4297ea0f3f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.718188 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9566baf3-f84d-4ef8-9d59-99a2eaefd041-signing-key\") pod \"service-ca-9c57cc56f-ndpch\" (UID: \"9566baf3-f84d-4ef8-9d59-99a2eaefd041\") " pod="openshift-service-ca/service-ca-9c57cc56f-ndpch" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.719019 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9566baf3-f84d-4ef8-9d59-99a2eaefd041-signing-cabundle\") pod \"service-ca-9c57cc56f-ndpch\" (UID: \"9566baf3-f84d-4ef8-9d59-99a2eaefd041\") " pod="openshift-service-ca/service-ca-9c57cc56f-ndpch" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.719605 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/a0672b72-0b66-434e-8930-4297ea0f3f98-images\") pod \"machine-api-operator-5694c8668f-xhcw8\" (UID: \"a0672b72-0b66-434e-8930-4297ea0f3f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.726332 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.726604 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcdeb21e-79e4-4fbb-bd32-76678f306353-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kb79v\" (UID: \"dcdeb21e-79e4-4fbb-bd32-76678f306353\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.726657 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a420a5d-a184-43dd-a25c-80c97502ce62-metrics-certs\") pod \"router-default-5444994796-mfl9q\" (UID: \"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.727237 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c45e1418-cf4c-43e5-a9b6-8ff35e5305b8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-p6frm\" (UID: \"c45e1418-cf4c-43e5-a9b6-8ff35e5305b8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm" Nov 24 12:29:19 crc 
kubenswrapper[4756]: I1124 12:29:19.726247 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-trusted-ca\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.728408 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a420a5d-a184-43dd-a25c-80c97502ce62-service-ca-bundle\") pod \"router-default-5444994796-mfl9q\" (UID: \"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.728894 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-registry-certificates\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.728982 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7fe174d-06d6-4000-9f8d-3142f627744e-webhook-cert\") pod \"packageserver-d55dfcdfc-mlfk9\" (UID: \"d7fe174d-06d6-4000-9f8d-3142f627744e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.729576 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/211bc6d3-e827-4440-ae4a-d418b9bfd3f6-proxy-tls\") pod \"machine-config-operator-74547568cd-bdnrj\" (UID: \"211bc6d3-e827-4440-ae4a-d418b9bfd3f6\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.730062 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7fe174d-06d6-4000-9f8d-3142f627744e-apiservice-cert\") pod \"packageserver-d55dfcdfc-mlfk9\" (UID: \"d7fe174d-06d6-4000-9f8d-3142f627744e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.733060 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3a420a5d-a184-43dd-a25c-80c97502ce62-stats-auth\") pod \"router-default-5444994796-mfl9q\" (UID: \"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.734706 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-registry-tls\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.736094 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4c411403-4c19-4c73-9d30-d818464db788-etcd-client\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.744853 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: 
\"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.753047 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c411403-4c19-4c73-9d30-d818464db788-serving-cert\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.762965 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvwp9\" (UniqueName: \"kubernetes.io/projected/d7fe174d-06d6-4000-9f8d-3142f627744e-kube-api-access-jvwp9\") pod \"packageserver-d55dfcdfc-mlfk9\" (UID: \"d7fe174d-06d6-4000-9f8d-3142f627744e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.764136 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pxt4\" (UniqueName: \"kubernetes.io/projected/a5112751-74d4-43c5-aa07-d8b794af0ae3-kube-api-access-5pxt4\") pod \"openshift-controller-manager-operator-756b6f6bc6-dk9n6\" (UID: \"a5112751-74d4-43c5-aa07-d8b794af0ae3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.790112 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9grxn\" (UniqueName: \"kubernetes.io/projected/a0672b72-0b66-434e-8930-4297ea0f3f98-kube-api-access-9grxn\") pod \"machine-api-operator-5694c8668f-xhcw8\" (UID: \"a0672b72-0b66-434e-8930-4297ea0f3f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.801381 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.801572 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-plugins-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.801612 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ff06ae8-376c-4319-9358-200e6f312237-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xbhls\" (UID: \"3ff06ae8-376c-4319-9358-200e6f312237\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.801632 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87c12acc-f25d-4842-a473-832d6b998769-metrics-tls\") pod \"dns-operator-744455d44c-ssq2z\" (UID: \"87c12acc-f25d-4842-a473-832d6b998769\") " pod="openshift-dns-operator/dns-operator-744455d44c-ssq2z" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.801652 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/264f14e7-1512-4403-9dbc-0c6af40ec87b-config\") pod \"kube-apiserver-operator-766d6c64bb-lnh7z\" (UID: \"264f14e7-1512-4403-9dbc-0c6af40ec87b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.801669 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2l2m4\" (UniqueName: \"kubernetes.io/projected/1f52de20-d009-4ede-b66a-ba6f395cd99b-kube-api-access-2l2m4\") pod \"ingress-canary-bmgrw\" (UID: \"1f52de20-d009-4ede-b66a-ba6f395cd99b\") " pod="openshift-ingress-canary/ingress-canary-bmgrw" Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.801685 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f52de20-d009-4ede-b66a-ba6f395cd99b-cert\") pod \"ingress-canary-bmgrw\" (UID: \"1f52de20-d009-4ede-b66a-ba6f395cd99b\") " pod="openshift-ingress-canary/ingress-canary-bmgrw" Nov 24 12:29:19 crc kubenswrapper[4756]: E1124 12:29:19.801725 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:20.301695145 +0000 UTC m=+92.659209287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.801857 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bnnhq\" (UID: \"c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.801907 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cf127cc-52b2-4121-8f1d-b2a628f95ea4-config-volume\") pod \"dns-default-h8l7m\" (UID: \"4cf127cc-52b2-4121-8f1d-b2a628f95ea4\") " pod="openshift-dns/dns-default-h8l7m"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.801929 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s65xf\" (UniqueName: \"kubernetes.io/projected/bcb119b7-a5d6-411a-8b0e-78d09d23c6b0-kube-api-access-s65xf\") pod \"multus-admission-controller-857f4d67dd-jjsts\" (UID: \"bcb119b7-a5d6-411a-8b0e-78d09d23c6b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jjsts"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.801957 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-mountpoint-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.801999 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-csi-data-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.802023 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6139f45c-2a2e-4561-af2e-2451b1f7bc15-serving-cert\") pod \"service-ca-operator-777779d784-r5pv5\" (UID: \"6139f45c-2a2e-4561-af2e-2451b1f7bc15\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.802133 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/036136f7-02ff-449a-9367-0cf354821811-config-volume\") pod \"collect-profiles-29399775-r6bnr\" (UID: \"036136f7-02ff-449a-9367-0cf354821811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.802204 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/21b90170-abc5-427d-92a4-d7f4240bb961-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6ks7x\" (UID: \"21b90170-abc5-427d-92a4-d7f4240bb961\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.802227 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6139f45c-2a2e-4561-af2e-2451b1f7bc15-config\") pod \"service-ca-operator-777779d784-r5pv5\" (UID: \"6139f45c-2a2e-4561-af2e-2451b1f7bc15\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.802264 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8d58\" (UniqueName: \"kubernetes.io/projected/86c61608-af41-4137-9fa6-d07c1e89f13d-kube-api-access-f8d58\") pod \"catalog-operator-68c6474976-6ql8x\" (UID: \"86c61608-af41-4137-9fa6-d07c1e89f13d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.802319 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ee5a29c4-e735-4d02-9263-1789e47f9e75-node-bootstrap-token\") pod \"machine-config-server-mcf75\" (UID: \"ee5a29c4-e735-4d02-9263-1789e47f9e75\") " pod="openshift-machine-config-operator/machine-config-server-mcf75"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.802370 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcnpx\" (UniqueName: \"kubernetes.io/projected/6139f45c-2a2e-4561-af2e-2451b1f7bc15-kube-api-access-zcnpx\") pod \"service-ca-operator-777779d784-r5pv5\" (UID: \"6139f45c-2a2e-4561-af2e-2451b1f7bc15\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.803313 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3-trusted-ca\") pod \"ingress-operator-5b745b69d9-xz297\" (UID: \"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.803338 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ee5a29c4-e735-4d02-9263-1789e47f9e75-certs\") pod \"machine-config-server-mcf75\" (UID: \"ee5a29c4-e735-4d02-9263-1789e47f9e75\") " pod="openshift-machine-config-operator/machine-config-server-mcf75"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.804115 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cf127cc-52b2-4121-8f1d-b2a628f95ea4-config-volume\") pod \"dns-default-h8l7m\" (UID: \"4cf127cc-52b2-4121-8f1d-b2a628f95ea4\") " pod="openshift-dns/dns-default-h8l7m"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.804804 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk55w\" (UniqueName: \"kubernetes.io/projected/87c12acc-f25d-4842-a473-832d6b998769-kube-api-access-hk55w\") pod \"dns-operator-744455d44c-ssq2z\" (UID: \"87c12acc-f25d-4842-a473-832d6b998769\") " pod="openshift-dns-operator/dns-operator-744455d44c-ssq2z"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.804848 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/264f14e7-1512-4403-9dbc-0c6af40ec87b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lnh7z\" (UID: \"264f14e7-1512-4403-9dbc-0c6af40ec87b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.804897 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txrxp\" (UniqueName: \"kubernetes.io/projected/c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977-kube-api-access-txrxp\") pod \"package-server-manager-789f6589d5-bnnhq\" (UID: \"c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.804939 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw95z\" (UniqueName: \"kubernetes.io/projected/3ff06ae8-376c-4319-9358-200e6f312237-kube-api-access-xw95z\") pod \"marketplace-operator-79b997595-xbhls\" (UID: \"3ff06ae8-376c-4319-9358-200e6f312237\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbhls"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.805051 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3ff06ae8-376c-4319-9358-200e6f312237-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xbhls\" (UID: \"3ff06ae8-376c-4319-9358-200e6f312237\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbhls"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.805077 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-socket-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.805725 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-csi-data-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.805143 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264f14e7-1512-4403-9dbc-0c6af40ec87b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lnh7z\" (UID: \"264f14e7-1512-4403-9dbc-0c6af40ec87b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.806604 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-socket-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.806654 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4cf127cc-52b2-4121-8f1d-b2a628f95ea4-metrics-tls\") pod \"dns-default-h8l7m\" (UID: \"4cf127cc-52b2-4121-8f1d-b2a628f95ea4\") " pod="openshift-dns/dns-default-h8l7m"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.806745 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx9js\" (UniqueName: \"kubernetes.io/projected/036136f7-02ff-449a-9367-0cf354821811-kube-api-access-lx9js\") pod \"collect-profiles-29399775-r6bnr\" (UID: \"036136f7-02ff-449a-9367-0cf354821811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.806763 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmnzj\" (UniqueName: \"kubernetes.io/projected/4cf127cc-52b2-4121-8f1d-b2a628f95ea4-kube-api-access-fmnzj\") pod \"dns-default-h8l7m\" (UID: \"4cf127cc-52b2-4121-8f1d-b2a628f95ea4\") " pod="openshift-dns/dns-default-h8l7m"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.806868 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21b90170-abc5-427d-92a4-d7f4240bb961-srv-cert\") pod \"olm-operator-6b444d44fb-6ks7x\" (UID: \"21b90170-abc5-427d-92a4-d7f4240bb961\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.806908 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86c61608-af41-4137-9fa6-d07c1e89f13d-srv-cert\") pod \"catalog-operator-68c6474976-6ql8x\" (UID: \"86c61608-af41-4137-9fa6-d07c1e89f13d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.806930 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6j5k\" (UniqueName: \"kubernetes.io/projected/ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3-kube-api-access-x6j5k\") pod \"ingress-operator-5b745b69d9-xz297\" (UID: \"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.806951 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3-metrics-tls\") pod \"ingress-operator-5b745b69d9-xz297\" (UID: \"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.806986 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq8ql\" (UniqueName: \"kubernetes.io/projected/5a60e484-5344-420c-8f60-aea62504ed10-kube-api-access-lq8ql\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.807018 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86c61608-af41-4137-9fa6-d07c1e89f13d-profile-collector-cert\") pod \"catalog-operator-68c6474976-6ql8x\" (UID: \"86c61608-af41-4137-9fa6-d07c1e89f13d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.807063 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bcb119b7-a5d6-411a-8b0e-78d09d23c6b0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jjsts\" (UID: \"bcb119b7-a5d6-411a-8b0e-78d09d23c6b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jjsts"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.807162 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.807135 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-mountpoint-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.807190 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/37bf3224-33a8-45ab-93fc-05a44ed3f535-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-v5ndr\" (UID: \"37bf3224-33a8-45ab-93fc-05a44ed3f535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.807350 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/036136f7-02ff-449a-9367-0cf354821811-config-volume\") pod \"collect-profiles-29399775-r6bnr\" (UID: \"036136f7-02ff-449a-9367-0cf354821811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.807356 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/036136f7-02ff-449a-9367-0cf354821811-secret-volume\") pod \"collect-profiles-29399775-r6bnr\" (UID: \"036136f7-02ff-449a-9367-0cf354821811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.807621 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gdtj\" (UniqueName: \"kubernetes.io/projected/ee5a29c4-e735-4d02-9263-1789e47f9e75-kube-api-access-5gdtj\") pod \"machine-config-server-mcf75\" (UID: \"ee5a29c4-e735-4d02-9263-1789e47f9e75\") " pod="openshift-machine-config-operator/machine-config-server-mcf75"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.807704 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fcc8\" (UniqueName: \"kubernetes.io/projected/37bf3224-33a8-45ab-93fc-05a44ed3f535-kube-api-access-6fcc8\") pod \"control-plane-machine-set-operator-78cbb6b69f-v5ndr\" (UID: \"37bf3224-33a8-45ab-93fc-05a44ed3f535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.807739 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs96n\" (UniqueName: \"kubernetes.io/projected/21b90170-abc5-427d-92a4-d7f4240bb961-kube-api-access-hs96n\") pod \"olm-operator-6b444d44fb-6ks7x\" (UID: \"21b90170-abc5-427d-92a4-d7f4240bb961\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.807763 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xz297\" (UID: \"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.807790 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-registration-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.807923 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-registration-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.808713 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6139f45c-2a2e-4561-af2e-2451b1f7bc15-config\") pod \"service-ca-operator-777779d784-r5pv5\" (UID: \"6139f45c-2a2e-4561-af2e-2451b1f7bc15\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.808870 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5a60e484-5344-420c-8f60-aea62504ed10-plugins-dir\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.810145 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ff06ae8-376c-4319-9358-200e6f312237-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xbhls\" (UID: \"3ff06ae8-376c-4319-9358-200e6f312237\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbhls"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.810868 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/264f14e7-1512-4403-9dbc-0c6af40ec87b-config\") pod \"kube-apiserver-operator-766d6c64bb-lnh7z\" (UID: \"264f14e7-1512-4403-9dbc-0c6af40ec87b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z"
Nov 24 12:29:19 crc kubenswrapper[4756]: E1124 12:29:19.811183 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:20.311138194 +0000 UTC m=+92.668652546 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.811316 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-srchr"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.815740 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nmhtt"]
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.819364 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3-trusted-ca\") pod \"ingress-operator-5b745b69d9-xz297\" (UID: \"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.819945 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.822621 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f52de20-d009-4ede-b66a-ba6f395cd99b-cert\") pod \"ingress-canary-bmgrw\" (UID: \"1f52de20-d009-4ede-b66a-ba6f395cd99b\") " pod="openshift-ingress-canary/ingress-canary-bmgrw"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.823113 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/264f14e7-1512-4403-9dbc-0c6af40ec87b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lnh7z\" (UID: \"264f14e7-1512-4403-9dbc-0c6af40ec87b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.830243 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86c61608-af41-4137-9fa6-d07c1e89f13d-profile-collector-cert\") pod \"catalog-operator-68c6474976-6ql8x\" (UID: \"86c61608-af41-4137-9fa6-d07c1e89f13d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.830335 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3ff06ae8-376c-4319-9358-200e6f312237-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xbhls\" (UID: \"3ff06ae8-376c-4319-9358-200e6f312237\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbhls"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.831924 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ee5a29c4-e735-4d02-9263-1789e47f9e75-node-bootstrap-token\") pod \"machine-config-server-mcf75\" (UID: \"ee5a29c4-e735-4d02-9263-1789e47f9e75\") " pod="openshift-machine-config-operator/machine-config-server-mcf75"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.831928 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hz6cv"]
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.848736 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztsk5\" (UniqueName: \"kubernetes.io/projected/dcdeb21e-79e4-4fbb-bd32-76678f306353-kube-api-access-ztsk5\") pod \"kube-storage-version-migrator-operator-b67b599dd-kb79v\" (UID: \"dcdeb21e-79e4-4fbb-bd32-76678f306353\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.851212 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3-metrics-tls\") pod \"ingress-operator-5b745b69d9-xz297\" (UID: \"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.851791 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bnnhq\" (UID: \"c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.852612 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6139f45c-2a2e-4561-af2e-2451b1f7bc15-serving-cert\") pod \"service-ca-operator-777779d784-r5pv5\" (UID: \"6139f45c-2a2e-4561-af2e-2451b1f7bc15\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.852800 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87c12acc-f25d-4842-a473-832d6b998769-metrics-tls\") pod \"dns-operator-744455d44c-ssq2z\" (UID: \"87c12acc-f25d-4842-a473-832d6b998769\") " pod="openshift-dns-operator/dns-operator-744455d44c-ssq2z"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.852962 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bcb119b7-a5d6-411a-8b0e-78d09d23c6b0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jjsts\" (UID: \"bcb119b7-a5d6-411a-8b0e-78d09d23c6b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jjsts"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.853062 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86c61608-af41-4137-9fa6-d07c1e89f13d-srv-cert\") pod \"catalog-operator-68c6474976-6ql8x\" (UID: \"86c61608-af41-4137-9fa6-d07c1e89f13d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.853618 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/21b90170-abc5-427d-92a4-d7f4240bb961-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6ks7x\" (UID: \"21b90170-abc5-427d-92a4-d7f4240bb961\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.854080 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21b90170-abc5-427d-92a4-d7f4240bb961-srv-cert\") pod \"olm-operator-6b444d44fb-6ks7x\" (UID: \"21b90170-abc5-427d-92a4-d7f4240bb961\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.854350 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ee5a29c4-e735-4d02-9263-1789e47f9e75-certs\") pod \"machine-config-server-mcf75\" (UID: \"ee5a29c4-e735-4d02-9263-1789e47f9e75\") " pod="openshift-machine-config-operator/machine-config-server-mcf75"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.855779 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/37bf3224-33a8-45ab-93fc-05a44ed3f535-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-v5ndr\" (UID: \"37bf3224-33a8-45ab-93fc-05a44ed3f535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.855810 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-bound-sa-token\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.857195 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4cf127cc-52b2-4121-8f1d-b2a628f95ea4-metrics-tls\") pod \"dns-default-h8l7m\" (UID: \"4cf127cc-52b2-4121-8f1d-b2a628f95ea4\") " pod="openshift-dns/dns-default-h8l7m"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.857554 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/036136f7-02ff-449a-9367-0cf354821811-secret-volume\") pod \"collect-profiles-29399775-r6bnr\" (UID: \"036136f7-02ff-449a-9367-0cf354821811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.858607 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.860083 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.874809 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.875651 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz"]
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.882032 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt4d6\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-kube-api-access-dt4d6\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.889177 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd"]
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.896312 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnk6w\" (UniqueName: \"kubernetes.io/projected/4c411403-4c19-4c73-9d30-d818464db788-kube-api-access-bnk6w\") pod \"etcd-operator-b45778765-xlwmt\" (UID: \"4c411403-4c19-4c73-9d30-d818464db788\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.902944 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79kx7\" (UniqueName: \"kubernetes.io/projected/3a420a5d-a184-43dd-a25c-80c97502ce62-kube-api-access-79kx7\") pod \"router-default-5444994796-mfl9q\" (UID: \"3a420a5d-a184-43dd-a25c-80c97502ce62\") " pod="openshift-ingress/router-default-5444994796-mfl9q"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.911062 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 12:29:19 crc kubenswrapper[4756]: E1124 12:29:19.911733 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:20.411707723 +0000 UTC m=+92.769221875 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.937077 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttnqj\" (UniqueName: \"kubernetes.io/projected/211bc6d3-e827-4440-ae4a-d418b9bfd3f6-kube-api-access-ttnqj\") pod \"machine-config-operator-74547568cd-bdnrj\" (UID: \"211bc6d3-e827-4440-ae4a-d418b9bfd3f6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.948803 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85bdd\" (UniqueName: \"kubernetes.io/projected/9566baf3-f84d-4ef8-9d59-99a2eaefd041-kube-api-access-85bdd\") pod \"service-ca-9c57cc56f-ndpch\" (UID: \"9566baf3-f84d-4ef8-9d59-99a2eaefd041\") " pod="openshift-service-ca/service-ca-9c57cc56f-ndpch"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.965921 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sslw\" (UniqueName: \"kubernetes.io/projected/8dbcb8da-17af-41eb-bfc0-c26853a43c34-kube-api-access-9sslw\") pod \"migrator-59844c95c7-dlzvw\" (UID: \"8dbcb8da-17af-41eb-bfc0-c26853a43c34\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlzvw"
Nov 24 12:29:19 crc kubenswrapper[4756]: I1124 12:29:19.990110 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c45e1418-cf4c-43e5-a9b6-8ff35e5305b8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-p6frm\" (UID: \"c45e1418-cf4c-43e5-a9b6-8ff35e5305b8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm"
Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.004395 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00155282-9211-4df2-b258-105f5a0c8236-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ch9ff\" (UID: \"00155282-9211-4df2-b258-105f5a0c8236\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff"
Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.025087 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n"
Nov 24 12:29:20 crc kubenswrapper[4756]: E1124 12:29:20.025469 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:20.525455509 +0000 UTC m=+92.882969651 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.032942 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s65xf\" (UniqueName: \"kubernetes.io/projected/bcb119b7-a5d6-411a-8b0e-78d09d23c6b0-kube-api-access-s65xf\") pod \"multus-admission-controller-857f4d67dd-jjsts\" (UID: \"bcb119b7-a5d6-411a-8b0e-78d09d23c6b0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jjsts"
Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.050999 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txrxp\" (UniqueName: \"kubernetes.io/projected/c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977-kube-api-access-txrxp\") pod \"package-server-manager-789f6589d5-bnnhq\" (UID: \"c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq"
Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.067548 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk55w\" (UniqueName: \"kubernetes.io/projected/87c12acc-f25d-4842-a473-832d6b998769-kube-api-access-hk55w\") pod \"dns-operator-744455d44c-ssq2z\" (UID: \"87c12acc-f25d-4842-a473-832d6b998769\") " pod="openshift-dns-operator/dns-operator-744455d44c-ssq2z"
Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.084384 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw95z\" (UniqueName:
\"kubernetes.io/projected/3ff06ae8-376c-4319-9358-200e6f312237-kube-api-access-xw95z\") pod \"marketplace-operator-79b997595-xbhls\" (UID: \"3ff06ae8-376c-4319-9358-200e6f312237\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.101609 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264f14e7-1512-4403-9dbc-0c6af40ec87b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lnh7z\" (UID: \"264f14e7-1512-4403-9dbc-0c6af40ec87b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.103184 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hz6cv" event={"ID":"d587d404-97ce-49d5-92f9-360d94d6d061","Type":"ContainerStarted","Data":"b2084412ed61184e918f5dc9438b0b64a0d0fd689cdc3f28a17d503fc6431caf"} Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.112674 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.113020 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" event={"ID":"962ff4d2-ff0d-4e75-b04d-0d318c1980de","Type":"ContainerStarted","Data":"d36cc1dbb555d17fee720988d4206facd10874f671e2d6c6beee887e17d38df1"} Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.114765 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.117489 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx9js\" (UniqueName: \"kubernetes.io/projected/036136f7-02ff-449a-9367-0cf354821811-kube-api-access-lx9js\") pod \"collect-profiles-29399775-r6bnr\" (UID: \"036136f7-02ff-449a-9367-0cf354821811\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.123186 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" event={"ID":"ea390924-dfd9-4c47-90f2-ca9d413e7c5f","Type":"ContainerStarted","Data":"c8856cf6758b553d60c114c05cc00b1e0c391231e50e160985fc196425687fac"} Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.128254 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:20 crc kubenswrapper[4756]: E1124 12:29:20.128955 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:20.628935529 +0000 UTC m=+92.986449671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.129452 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlzvw" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.135368 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" event={"ID":"76ac3240-bc3d-4688-9aa1-1976279a656d","Type":"ContainerStarted","Data":"35a95105bdfd65f9b175b660c5f303b4c62f873a0cbe3d3506c0513683eabb56"} Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.135433 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.135447 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" event={"ID":"76ac3240-bc3d-4688-9aa1-1976279a656d","Type":"ContainerStarted","Data":"969ba4ed95555ba09c4b2ca06bd04a39676b65b8fc928966bd8370334701f36f"} Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.138788 4756 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mrnbw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.138859 4756 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" podUID="76ac3240-bc3d-4688-9aa1-1976279a656d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.143207 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" event={"ID":"2c4e2bff-ebd3-4ba3-aa22-605ad8193978","Type":"ContainerStarted","Data":"06367f6edc9dd5407dd9df3e326648910f281509653dde0d29a97d795f3be4fe"} Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.143255 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" event={"ID":"2c4e2bff-ebd3-4ba3-aa22-605ad8193978","Type":"ContainerStarted","Data":"959c39669d9d1af8703fd6ae7c4d434fa134f0988caaaaf492b3e5c23a1217d6"} Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.146013 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8d58\" (UniqueName: \"kubernetes.io/projected/86c61608-af41-4137-9fa6-d07c1e89f13d-kube-api-access-f8d58\") pod \"catalog-operator-68c6474976-6ql8x\" (UID: \"86c61608-af41-4137-9fa6-d07c1e89f13d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.154083 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.162817 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gdtj\" (UniqueName: \"kubernetes.io/projected/ee5a29c4-e735-4d02-9263-1789e47f9e75-kube-api-access-5gdtj\") pod \"machine-config-server-mcf75\" (UID: \"ee5a29c4-e735-4d02-9263-1789e47f9e75\") " pod="openshift-machine-config-operator/machine-config-server-mcf75" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.162936 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" event={"ID":"35a6cd00-6612-4277-9e8b-ed71bdb5e01d","Type":"ContainerStarted","Data":"ed8c8bb37da683e7ab95c3de0f37dbc3af7f895c0cbab6b5f2f77363f33a570b"} Nov 24 12:29:20 crc kubenswrapper[4756]: W1124 12:29:20.164494 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod534241bd_3ed6_4365_b787_5c9c50967479.slice/crio-29bd80e8552afd7202a847f2e8a131ab65c6490d7c467378e8ae5c84c5565d22 WatchSource:0}: Error finding container 29bd80e8552afd7202a847f2e8a131ab65c6490d7c467378e8ae5c84c5565d22: Status 404 returned error can't find the container with id 29bd80e8552afd7202a847f2e8a131ab65c6490d7c467378e8ae5c84c5565d22 Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.166510 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.173757 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" event={"ID":"2939e314-f46f-468e-8890-4ac369fc7482","Type":"ContainerStarted","Data":"ea569aa0511dbfbdfb23f1c607935cee144e3e47bf5dd7de7dab2baff67dfa70"} Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.180125 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ndpch" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.189567 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.195048 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.202229 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.204712 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mjd6w" event={"ID":"d799ee04-948d-4c0b-8e84-f209d40380fc","Type":"ContainerStarted","Data":"dae18d95652f678e7dd866c0134b1c7aef644a4f698d23f180dcf0b16a9e875e"} Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.204866 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mjd6w" event={"ID":"d799ee04-948d-4c0b-8e84-f209d40380fc","Type":"ContainerStarted","Data":"f4a1e4c7d105956c967ae3a1f1eba8dcf3f336719778d18c028a281454fdd836"} Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.204991 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xz297\" (UID: \"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.205078 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.221495 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l2m4\" (UniqueName: \"kubernetes.io/projected/1f52de20-d009-4ede-b66a-ba6f395cd99b-kube-api-access-2l2m4\") pod \"ingress-canary-bmgrw\" (UID: \"1f52de20-d009-4ede-b66a-ba6f395cd99b\") " pod="openshift-ingress-canary/ingress-canary-bmgrw" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.223899 4756 patch_prober.go:28] interesting pod/console-operator-58897d9998-mjd6w container/console-operator namespace/openshift-console-operator: Readiness 
probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.225604 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.226225 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.223987 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mjd6w" podUID="d799ee04-948d-4c0b-8e84-f209d40380fc" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.235248 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:20 crc kubenswrapper[4756]: E1124 12:29:20.239117 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:20.73909202 +0000 UTC m=+93.096606162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.240742 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.245216 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.245821 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.246289 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jjsts" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.255874 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmnzj\" (UniqueName: \"kubernetes.io/projected/4cf127cc-52b2-4121-8f1d-b2a628f95ea4-kube-api-access-fmnzj\") pod \"dns-default-h8l7m\" (UID: \"4cf127cc-52b2-4121-8f1d-b2a628f95ea4\") " pod="openshift-dns/dns-default-h8l7m" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.262908 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" event={"ID":"86948975-86fc-40c1-970b-1fa5d7860497","Type":"ContainerStarted","Data":"875ccb9cb5ff580e2786a4cba8f59a5cd9f51905e118722394376218e2f69c9f"} Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.262966 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" event={"ID":"86948975-86fc-40c1-970b-1fa5d7860497","Type":"ContainerStarted","Data":"f492bad819cc1725ce48c805ca5e6779a62caca55a61c73fa6bf058b9738b8d4"} Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.267756 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-h8l7m" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.268596 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.270084 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6j5k\" (UniqueName: \"kubernetes.io/projected/ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3-kube-api-access-x6j5k\") pod \"ingress-operator-5b745b69d9-xz297\" (UID: \"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.275462 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.282072 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tsnkp"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.283023 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs96n\" (UniqueName: \"kubernetes.io/projected/21b90170-abc5-427d-92a4-d7f4240bb961-kube-api-access-hs96n\") pod \"olm-operator-6b444d44fb-6ks7x\" (UID: \"21b90170-abc5-427d-92a4-d7f4240bb961\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.287923 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq8ql\" (UniqueName: \"kubernetes.io/projected/5a60e484-5344-420c-8f60-aea62504ed10-kube-api-access-lq8ql\") pod \"csi-hostpathplugin-8xkjk\" (UID: \"5a60e484-5344-420c-8f60-aea62504ed10\") " pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.288500 4756 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ssq2z" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.298729 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mcf75" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.305571 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcnpx\" (UniqueName: \"kubernetes.io/projected/6139f45c-2a2e-4561-af2e-2451b1f7bc15-kube-api-access-zcnpx\") pod \"service-ca-operator-777779d784-r5pv5\" (UID: \"6139f45c-2a2e-4561-af2e-2451b1f7bc15\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.306006 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bmgrw" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.327027 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.328049 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fcc8\" (UniqueName: \"kubernetes.io/projected/37bf3224-33a8-45ab-93fc-05a44ed3f535-kube-api-access-6fcc8\") pod \"control-plane-machine-set-operator-78cbb6b69f-v5ndr\" (UID: \"37bf3224-33a8-45ab-93fc-05a44ed3f535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.337947 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:20 crc kubenswrapper[4756]: E1124 12:29:20.340083 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:20.840062577 +0000 UTC m=+93.197576719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.353015 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vh4jr"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.439670 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:20 crc kubenswrapper[4756]: E1124 12:29:20.440446 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:20.940427012 +0000 UTC m=+93.297941154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.529658 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.530789 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-srchr"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.539753 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.540751 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:20 crc kubenswrapper[4756]: E1124 12:29:20.540908 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:21.040881488 +0000 UTC m=+93.398395630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.541136 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:20 crc kubenswrapper[4756]: E1124 12:29:20.541656 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:21.041639334 +0000 UTC m=+93.399153476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.551921 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.569651 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x" Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.597608 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.622515 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xhcw8"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.643308 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:20 crc kubenswrapper[4756]: E1124 12:29:20.643736 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:21.143719145 +0000 UTC m=+93.501233287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.673517 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.745587 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:20 crc kubenswrapper[4756]: E1124 12:29:20.746595 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:21.246580832 +0000 UTC m=+93.604094974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.760354 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dlzvw"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.807931 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.813020 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.836380 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xlwmt"] Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.847466 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:20 crc kubenswrapper[4756]: E1124 12:29:20.847588 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 12:29:21.34755986 +0000 UTC m=+93.705073992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.847759 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:20 crc kubenswrapper[4756]: E1124 12:29:20.848257 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:21.348239434 +0000 UTC m=+93.705753576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:20 crc kubenswrapper[4756]: W1124 12:29:20.943503 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc45e1418_cf4c_43e5_a9b6_8ff35e5305b8.slice/crio-aca50587541bfd3ec0d8b84ad0a5d667d7c214b2590d0bd61e171fd2bb24b9f0 WatchSource:0}: Error finding container aca50587541bfd3ec0d8b84ad0a5d667d7c214b2590d0bd61e171fd2bb24b9f0: Status 404 returned error can't find the container with id aca50587541bfd3ec0d8b84ad0a5d667d7c214b2590d0bd61e171fd2bb24b9f0 Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.951275 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:20 crc kubenswrapper[4756]: E1124 12:29:20.951739 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:21.451620352 +0000 UTC m=+93.809134494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:20 crc kubenswrapper[4756]: I1124 12:29:20.951970 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:20 crc kubenswrapper[4756]: E1124 12:29:20.953321 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:21.453288617 +0000 UTC m=+93.810802769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.054440 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:21 crc kubenswrapper[4756]: E1124 12:29:21.054850 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:21.554832456 +0000 UTC m=+93.912346598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.150600 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj"] Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.156809 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:21 crc kubenswrapper[4756]: E1124 12:29:21.157488 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:21.657467019 +0000 UTC m=+94.014981161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.181257 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ndpch"] Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.259237 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:21 crc kubenswrapper[4756]: E1124 12:29:21.259631 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:21.759560239 +0000 UTC m=+94.117074381 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.261722 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:21 crc kubenswrapper[4756]: E1124 12:29:21.262361 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:21.762351798 +0000 UTC m=+94.119865930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.312564 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlzvw" event={"ID":"8dbcb8da-17af-41eb-bfc0-c26853a43c34","Type":"ContainerStarted","Data":"93ecf669473750bc5be8ecd8ea777ffeb193519afc578c6c38f32b88a4850619"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.344108 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" event={"ID":"f0925731-afa5-4a9c-b72d-9806c16dab59","Type":"ContainerStarted","Data":"8747ae9a4fb24f70720ca5a06d038352c6364e8dfff4282fcf68f18953f543ed"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.353107 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm" event={"ID":"c45e1418-cf4c-43e5-a9b6-8ff35e5305b8","Type":"ContainerStarted","Data":"aca50587541bfd3ec0d8b84ad0a5d667d7c214b2590d0bd61e171fd2bb24b9f0"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.362783 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:21 crc kubenswrapper[4756]: E1124 12:29:21.363264 4756 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:21.863241434 +0000 UTC m=+94.220755576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.400527 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9" event={"ID":"9feafd03-8082-4d57-9611-602776ad0db6","Type":"ContainerStarted","Data":"91bc95ac71cc85ddd66e786e8b80646c8f752145388f622a50188d6b6791ed05"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.412582 4756 generic.go:334] "Generic (PLEG): container finished" podID="962ff4d2-ff0d-4e75-b04d-0d318c1980de" containerID="f31a695c06f8d3f101ac0b6913038741ae46e9ca2063907b6cccf7658165e9af" exitCode=0 Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.413412 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" event={"ID":"962ff4d2-ff0d-4e75-b04d-0d318c1980de","Type":"ContainerDied","Data":"f31a695c06f8d3f101ac0b6913038741ae46e9ca2063907b6cccf7658165e9af"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.419994 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mcf75" 
event={"ID":"ee5a29c4-e735-4d02-9263-1789e47f9e75","Type":"ContainerStarted","Data":"19548ab4429bd466e7339964b5bb90bd9e6e977557ec3f5a07716e50f40fc2b1"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.421674 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" event={"ID":"d7fe174d-06d6-4000-9f8d-3142f627744e","Type":"ContainerStarted","Data":"41339a48a3395660c67a78f5b75b9ab876f47b0be55256fc00598d0a60263230"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.437126 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" event={"ID":"2939e314-f46f-468e-8890-4ac369fc7482","Type":"ContainerStarted","Data":"a5ec0b6d5bbab41dfd96fb72cf289e0b7c550071cbef46a8344868c49a451338"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.440395 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" event={"ID":"ea390924-dfd9-4c47-90f2-ca9d413e7c5f","Type":"ContainerStarted","Data":"457f026b0c40ee1a60f452cfd9e7b52f6f4956243d37a13c33f37daa7ee84576"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.440985 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.442881 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" event={"ID":"534241bd-3ed6-4365-b787-5c9c50967479","Type":"ContainerStarted","Data":"29bd80e8552afd7202a847f2e8a131ab65c6490d7c467378e8ae5c84c5565d22"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.448365 4756 generic.go:334] "Generic (PLEG): container finished" podID="2c4e2bff-ebd3-4ba3-aa22-605ad8193978" 
containerID="06367f6edc9dd5407dd9df3e326648910f281509653dde0d29a97d795f3be4fe" exitCode=0 Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.448429 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" event={"ID":"2c4e2bff-ebd3-4ba3-aa22-605ad8193978","Type":"ContainerDied","Data":"06367f6edc9dd5407dd9df3e326648910f281509653dde0d29a97d795f3be4fe"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.448456 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" event={"ID":"2c4e2bff-ebd3-4ba3-aa22-605ad8193978","Type":"ContainerStarted","Data":"6f6db699e34fca73532af11e806bf91c5f01f147b3ce2d6181f3cfe57d4a668d"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.449477 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.464824 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:21 crc kubenswrapper[4756]: E1124 12:29:21.465179 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:21.965166591 +0000 UTC m=+94.322680733 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.483507 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" event={"ID":"8c6400f9-d8a4-48da-986a-b9dd8bc96a82","Type":"ContainerStarted","Data":"216b13b7d5364c540384df258230f361c83dc6bf6af65b9c8803e4513bbe7c3a"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.488593 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mfl9q" event={"ID":"3a420a5d-a184-43dd-a25c-80c97502ce62","Type":"ContainerStarted","Data":"903f4ef0e8d432a883f5108e34b7faef75ad990d2f7ce7041be14f6d327477e9"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.508673 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" event={"ID":"4c411403-4c19-4c73-9d30-d818464db788","Type":"ContainerStarted","Data":"6b176b9e803942e43b32dcba5acd0293f12b7bf5957551294e49c9ab4684bb56"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.519800 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" event={"ID":"a0672b72-0b66-434e-8930-4297ea0f3f98","Type":"ContainerStarted","Data":"ee7d08c6202b5f3dd7282e9e1cda2ff5301bac17fb644cf4615b7cdfb6b5717f"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.548576 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-srchr" 
event={"ID":"874bfcf4-b717-4ee9-932f-8b28a2b68eac","Type":"ContainerStarted","Data":"fec9f3d7fc9f16020397b44b94aee0650cf5be3fa359ab9ea6d8bbbbc3e303b7"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.552306 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq" event={"ID":"c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977","Type":"ContainerStarted","Data":"5da9baddeff52e13000528e08c5c0e42346533e6e786777cd854a197e91ee6fd"} Nov 24 12:29:21 crc kubenswrapper[4756]: W1124 12:29:21.560671 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod211bc6d3_e827_4440_ae4a_d418b9bfd3f6.slice/crio-5666d9b5ae1cfaa48f3e49a853f9673206edda04c4c811c65c1a8ce4af4a8983 WatchSource:0}: Error finding container 5666d9b5ae1cfaa48f3e49a853f9673206edda04c4c811c65c1a8ce4af4a8983: Status 404 returned error can't find the container with id 5666d9b5ae1cfaa48f3e49a853f9673206edda04c4c811c65c1a8ce4af4a8983 Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.565228 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v" event={"ID":"dcdeb21e-79e4-4fbb-bd32-76678f306353","Type":"ContainerStarted","Data":"02a647d18cd8fa8712828c520aa182a1dc0fa8670d9602a069d4bcb0f0b3da1d"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.571183 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:21 crc kubenswrapper[4756]: E1124 12:29:21.572931 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:22.072897531 +0000 UTC m=+94.430411673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.577486 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:21 crc kubenswrapper[4756]: E1124 12:29:21.580267 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:22.080246055 +0000 UTC m=+94.437760197 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.584342 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.590825 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" event={"ID":"7a28b73f-bc1c-41f3-a275-37dd4e44b507","Type":"ContainerStarted","Data":"879db78318bbc30378620bb817056270423aad363f56d82664178e4dfd670eca"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.614293 4756 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-nmhtt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.614370 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" podUID="35a6cd00-6612-4277-9e8b-ed71bdb5e01d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.632480 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hz6cv" 
event={"ID":"d587d404-97ce-49d5-92f9-360d94d6d061","Type":"ContainerStarted","Data":"ec639a1cf0f0b2a86f28d70cde73b2e1781e52862d324e201732bc9e59ee0d53"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.633747 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hz6cv" Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.653583 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mjd6w" podStartSLOduration=73.65355863 podStartE2EDuration="1m13.65355863s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:21.637472781 +0000 UTC m=+93.994986923" watchObservedRunningTime="2025-11-24 12:29:21.65355863 +0000 UTC m=+94.011072782" Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.662762 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.687182 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.689937 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" event={"ID":"86948975-86fc-40c1-970b-1fa5d7860497","Type":"ContainerStarted","Data":"34c47ed8a300d4540fba6b0acd9a94557a88a1677e223b97c1e05de1aa1cec2c"} Nov 24 12:29:21 crc kubenswrapper[4756]: E1124 12:29:21.691124 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:22.191098201 +0000 UTC m=+94.548612343 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.700290 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6" event={"ID":"a5112751-74d4-43c5-aa07-d8b794af0ae3","Type":"ContainerStarted","Data":"24666b7c93f7a512f791a2748c3be781648b360ca6b63bc42aa8161a09c181be"} Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.724494 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" podStartSLOduration=73.724467274 podStartE2EDuration="1m13.724467274s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:21.723390401 +0000 UTC m=+94.080904563" watchObservedRunningTime="2025-11-24 12:29:21.724467274 +0000 UTC m=+94.081981416" Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.754097 4756 patch_prober.go:28] interesting pod/downloads-7954f5f757-hz6cv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: 
connection refused" start-of-body= Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.754149 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hz6cv" podUID="d587d404-97ce-49d5-92f9-360d94d6d061" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.756514 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mjd6w" Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.757886 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.793090 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:21 crc kubenswrapper[4756]: E1124 12:29:21.793633 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:22.293618771 +0000 UTC m=+94.651132913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:21 crc kubenswrapper[4756]: I1124 12:29:21.894821 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:21 crc kubenswrapper[4756]: E1124 12:29:21.897656 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:22.397631862 +0000 UTC m=+94.755146004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.001309 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:22 crc kubenswrapper[4756]: E1124 12:29:22.002138 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:22.502119934 +0000 UTC m=+94.859634076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.083339 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ssq2z"] Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.103979 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:22 crc kubenswrapper[4756]: E1124 12:29:22.104533 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:22.60448535 +0000 UTC m=+94.961999482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.155147 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bmgrw"] Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.206683 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:22 crc kubenswrapper[4756]: E1124 12:29:22.207168 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:22.707138003 +0000 UTC m=+95.064652145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.278504 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" podStartSLOduration=73.278474626 podStartE2EDuration="1m13.278474626s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:22.257182597 +0000 UTC m=+94.614696729" watchObservedRunningTime="2025-11-24 12:29:22.278474626 +0000 UTC m=+94.635988768" Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.319894 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:22 crc kubenswrapper[4756]: E1124 12:29:22.320218 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:22.820177685 +0000 UTC m=+95.177691827 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.322232 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xbhls"] Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.321319 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:22 crc kubenswrapper[4756]: E1124 12:29:22.323879 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:22.823826301 +0000 UTC m=+95.181340443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.325373 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" podStartSLOduration=74.325342713 podStartE2EDuration="1m14.325342713s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:22.312593095 +0000 UTC m=+94.670107237" watchObservedRunningTime="2025-11-24 12:29:22.325342713 +0000 UTC m=+94.682856855" Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.335620 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x"] Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.339558 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff"] Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.382300 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v" podStartSLOduration=73.382279203 podStartE2EDuration="1m13.382279203s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:22.343001255 +0000 
UTC m=+94.700515397" watchObservedRunningTime="2025-11-24 12:29:22.382279203 +0000 UTC m=+94.739793345" Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.384987 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z"] Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.394847 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hz6cv" podStartSLOduration=74.394822217 podStartE2EDuration="1m14.394822217s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:22.393844297 +0000 UTC m=+94.751358449" watchObservedRunningTime="2025-11-24 12:29:22.394822217 +0000 UTC m=+94.752336359" Nov 24 12:29:22 crc kubenswrapper[4756]: W1124 12:29:22.410665 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ff06ae8_376c_4319_9358_200e6f312237.slice/crio-7ccb6e20c53fc927537b21f5ef128c76d74d1b9ebb8384f27d46808d508504da WatchSource:0}: Error finding container 7ccb6e20c53fc927537b21f5ef128c76d74d1b9ebb8384f27d46808d508504da: Status 404 returned error can't find the container with id 7ccb6e20c53fc927537b21f5ef128c76d74d1b9ebb8384f27d46808d508504da Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.427379 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:22 crc kubenswrapper[4756]: E1124 12:29:22.427737 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:22.92772016 +0000 UTC m=+95.285234292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.441039 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ppzl5" podStartSLOduration=74.44101461 podStartE2EDuration="1m14.44101461s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:22.438052378 +0000 UTC m=+94.795566540" watchObservedRunningTime="2025-11-24 12:29:22.44101461 +0000 UTC m=+94.798528752" Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.499737 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8xkjk"] Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.530176 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:22 crc kubenswrapper[4756]: E1124 
12:29:22.530571 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:23.030557667 +0000 UTC m=+95.388071809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:22 crc kubenswrapper[4756]: W1124 12:29:22.530652 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a60e484_5344_420c_8f60_aea62504ed10.slice/crio-c2f0e71bbe81d0bb9699447bc93b4e125dc7524b01723dcd2f11b0cf4158a85a WatchSource:0}: Error finding container c2f0e71bbe81d0bb9699447bc93b4e125dc7524b01723dcd2f11b0cf4158a85a: Status 404 returned error can't find the container with id c2f0e71bbe81d0bb9699447bc93b4e125dc7524b01723dcd2f11b0cf4158a85a Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.533230 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-h8l7m"] Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.540003 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" podStartSLOduration=74.539966325 podStartE2EDuration="1m14.539966325s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:22.522119479 +0000 UTC m=+94.879633631" 
watchObservedRunningTime="2025-11-24 12:29:22.539966325 +0000 UTC m=+94.897480467" Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.595124 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr"] Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.635638 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr"] Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.635708 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:22 crc kubenswrapper[4756]: E1124 12:29:22.636592 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:23.136512499 +0000 UTC m=+95.494026641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.647670 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xz297"] Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.673964 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jjsts"] Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.693077 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6f6fd" podStartSLOduration=74.69305781 podStartE2EDuration="1m14.69305781s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:22.674644663 +0000 UTC m=+95.032158805" watchObservedRunningTime="2025-11-24 12:29:22.69305781 +0000 UTC m=+95.050571952" Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.713136 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5"] Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.723854 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x"] Nov 24 12:29:22 crc kubenswrapper[4756]: W1124 12:29:22.727564 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cf127cc_52b2_4121_8f1d_b2a628f95ea4.slice/crio-b81d2b25cce089d5b289638e23f073402d0fb44009cbbb215ff7e28ce8718b7f WatchSource:0}: Error finding container b81d2b25cce089d5b289638e23f073402d0fb44009cbbb215ff7e28ce8718b7f: Status 404 returned error can't find the container with id b81d2b25cce089d5b289638e23f073402d0fb44009cbbb215ff7e28ce8718b7f Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.742069 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:22 crc kubenswrapper[4756]: E1124 12:29:22.742523 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:23.242506212 +0000 UTC m=+95.600020354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.765375 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff" event={"ID":"00155282-9211-4df2-b258-105f5a0c8236","Type":"ContainerStarted","Data":"c20c24511ee99717c3d5eedb125fc35ff18a6aad4c895b869f71c3b0a2a58cc5"} Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.771305 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq" event={"ID":"c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977","Type":"ContainerStarted","Data":"d9c3919b186b56013c2863f27952652339e3cb051038da1d98a93f8a1fc93c73"} Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.795784 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mcf75" event={"ID":"ee5a29c4-e735-4d02-9263-1789e47f9e75","Type":"ContainerStarted","Data":"399e8084ddee7ead9914fadceede42966197fca687d5363666d9d882690f591d"} Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.824619 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6" event={"ID":"a5112751-74d4-43c5-aa07-d8b794af0ae3","Type":"ContainerStarted","Data":"4c6d2002c414ec6e51fd0318981c167e375ee46039ea7055ab71e3c83de87cc4"} Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.854375 4756 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" podStartSLOduration=74.854342298 podStartE2EDuration="1m14.854342298s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:22.745629418 +0000 UTC m=+95.103143740" watchObservedRunningTime="2025-11-24 12:29:22.854342298 +0000 UTC m=+95.211856440" Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.854570 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-mcf75" podStartSLOduration=5.854564583 podStartE2EDuration="5.854564583s" podCreationTimestamp="2025-11-24 12:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:22.838311001 +0000 UTC m=+95.195825153" watchObservedRunningTime="2025-11-24 12:29:22.854564583 +0000 UTC m=+95.212078725" Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.854754 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.857709 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x" event={"ID":"86c61608-af41-4137-9fa6-d07c1e89f13d","Type":"ContainerStarted","Data":"bc8a394ad99fda1e8b09d2e69ed93569bf8537c0ff28e1c7d5b626c8686ae423"} Nov 24 12:29:22 crc kubenswrapper[4756]: E1124 12:29:22.859297 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:23.359249662 +0000 UTC m=+95.716763804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.916284 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ssq2z" event={"ID":"87c12acc-f25d-4842-a473-832d6b998769","Type":"ContainerStarted","Data":"4cf941553af7b7d70044c30637463a5914267a7d5f406a25931b81ef584ccc66"} Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.927072 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kb79v" event={"ID":"dcdeb21e-79e4-4fbb-bd32-76678f306353","Type":"ContainerStarted","Data":"4522ff820dddb457dfaf69aaec8d004e7534f959bfd04c1e478731f43d719cf8"} Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.936714 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dk9n6" podStartSLOduration=74.936690253 podStartE2EDuration="1m14.936690253s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:22.935184792 +0000 UTC m=+95.292698954" 
watchObservedRunningTime="2025-11-24 12:29:22.936690253 +0000 UTC m=+95.294204395" Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.974031 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.974482 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ndpch" event={"ID":"9566baf3-f84d-4ef8-9d59-99a2eaefd041","Type":"ContainerStarted","Data":"6c9dcf93453bee981051a216304321591e9456fd99dae00be3d4bb7e1f0ed023"} Nov 24 12:29:22 crc kubenswrapper[4756]: I1124 12:29:22.977878 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ndpch" event={"ID":"9566baf3-f84d-4ef8-9d59-99a2eaefd041","Type":"ContainerStarted","Data":"aedc78abad6bff5c2425b1ca4deb12f57623e7f3e0ca42ff2d06f51dd44a984d"} Nov 24 12:29:22 crc kubenswrapper[4756]: E1124 12:29:22.976439 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:23.47641514 +0000 UTC m=+95.833929282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.019274 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm" event={"ID":"c45e1418-cf4c-43e5-a9b6-8ff35e5305b8","Type":"ContainerStarted","Data":"b011abca1c822eb78a3b54ac4d1053460114f463cd8c2b611a85433f0b8a5d96"} Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.036247 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bmgrw" event={"ID":"1f52de20-d009-4ede-b66a-ba6f395cd99b","Type":"ContainerStarted","Data":"9670e3412124a1fac8a25cae518c25a29d7abe469cccc9a84958c1444aee535e"} Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.085118 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:23 crc kubenswrapper[4756]: E1124 12:29:23.086688 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:23.586629972 +0000 UTC m=+95.944144104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.112823 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-bmgrw" podStartSLOduration=6.112794073 podStartE2EDuration="6.112794073s" podCreationTimestamp="2025-11-24 12:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:23.100409412 +0000 UTC m=+95.457923554" watchObservedRunningTime="2025-11-24 12:29:23.112794073 +0000 UTC m=+95.470308215" Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.113280 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-ndpch" podStartSLOduration=74.113270833 podStartE2EDuration="1m14.113270833s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:23.054003595 +0000 UTC m=+95.411517737" watchObservedRunningTime="2025-11-24 12:29:23.113270833 +0000 UTC m=+95.470785435" Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.116868 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" event={"ID":"534241bd-3ed6-4365-b787-5c9c50967479","Type":"ContainerStarted","Data":"9741b6e67c982c4860a4d215262ef6404097a7aa111df18d3f568946a756e2ee"} Nov 24 12:29:23 crc kubenswrapper[4756]: 
I1124 12:29:23.116930 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" event={"ID":"534241bd-3ed6-4365-b787-5c9c50967479","Type":"ContainerStarted","Data":"6c2a0ffaf5da27564d258fef8facdce8d2095b16ab3b2252c91ca6fd672264b1"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.138943 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" event={"ID":"3ff06ae8-376c-4319-9358-200e6f312237","Type":"ContainerStarted","Data":"7ccb6e20c53fc927537b21f5ef128c76d74d1b9ebb8384f27d46808d508504da"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.155436 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlzvw" event={"ID":"8dbcb8da-17af-41eb-bfc0-c26853a43c34","Type":"ContainerStarted","Data":"29454b2188b5d1a4e89f27e63d22eb5a0b33765d1e6d08a361d25415e26fa0a0"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.169302 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" event={"ID":"4c411403-4c19-4c73-9d30-d818464db788","Type":"ContainerStarted","Data":"0f6e1d48548e9a65794a337be4847938743089059dab029aebd05090fa306a80"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.190963 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n"
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.191429 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p6frm" podStartSLOduration=75.19140655 podStartE2EDuration="1m15.19140655s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:23.173309908 +0000 UTC m=+95.530824060" watchObservedRunningTime="2025-11-24 12:29:23.19140655 +0000 UTC m=+95.548920692"
Nov 24 12:29:23 crc kubenswrapper[4756]: E1124 12:29:23.193733 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:23.693719058 +0000 UTC m=+96.051233200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.218571 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-xlwmt" podStartSLOduration=75.218552141 podStartE2EDuration="1m15.218552141s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:23.216678572 +0000 UTC m=+95.574192714" watchObservedRunningTime="2025-11-24 12:29:23.218552141 +0000 UTC m=+95.576066283"
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.242435 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9" event={"ID":"9feafd03-8082-4d57-9611-602776ad0db6","Type":"ContainerStarted","Data":"193bbccf598969ca2af25358fa6d0b287e16e16da75bfbc1d94ab6eaa1f59d22"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.266659 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlzvw" podStartSLOduration=74.266638035 podStartE2EDuration="1m14.266638035s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:23.261386694 +0000 UTC m=+95.618900836" watchObservedRunningTime="2025-11-24 12:29:23.266638035 +0000 UTC m=+95.624152177"
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.268424 4756 generic.go:334] "Generic (PLEG): container finished" podID="8c6400f9-d8a4-48da-986a-b9dd8bc96a82" containerID="c7d4fee000c217adddff9a8a68a23b8969525cd80b9f67cd1dc2aef7f6cf6f87" exitCode=0
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.269658 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" event={"ID":"8c6400f9-d8a4-48da-986a-b9dd8bc96a82","Type":"ContainerDied","Data":"c7d4fee000c217adddff9a8a68a23b8969525cd80b9f67cd1dc2aef7f6cf6f87"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.293060 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 12:29:23 crc kubenswrapper[4756]: E1124 12:29:23.294572 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:23.794516542 +0000 UTC m=+96.152030844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.302233 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zd7wz" podStartSLOduration=74.302210964 podStartE2EDuration="1m14.302210964s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:23.300439727 +0000 UTC m=+95.657953879" watchObservedRunningTime="2025-11-24 12:29:23.302210964 +0000 UTC m=+95.659725106"
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.409860 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" event={"ID":"211bc6d3-e827-4440-ae4a-d418b9bfd3f6","Type":"ContainerStarted","Data":"d58351bba1757bf170ec8ae3ef4e1b62dda5a46b10e140198ba6786ff14317cc"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.410321 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" event={"ID":"211bc6d3-e827-4440-ae4a-d418b9bfd3f6","Type":"ContainerStarted","Data":"5666d9b5ae1cfaa48f3e49a853f9673206edda04c4c811c65c1a8ce4af4a8983"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.413777 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n"
Nov 24 12:29:23 crc kubenswrapper[4756]: E1124 12:29:23.414420 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:23.914400407 +0000 UTC m=+96.271914549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.474066 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" event={"ID":"35a6cd00-6612-4277-9e8b-ed71bdb5e01d","Type":"ContainerStarted","Data":"a6710f37ec5a7f43f92a289d92bd05c29202c370ee22464b01277d6b57bd0f07"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.509199 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z" event={"ID":"264f14e7-1512-4403-9dbc-0c6af40ec87b","Type":"ContainerStarted","Data":"f765ad4ce1d051cc49172e7fd5e5b928f2a9aa8eb0d841294df0598d1ee86156"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.515126 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 12:29:23 crc kubenswrapper[4756]: E1124 12:29:23.518776 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:24.018754905 +0000 UTC m=+96.376269047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.535338 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" event={"ID":"5a60e484-5344-420c-8f60-aea62504ed10","Type":"ContainerStarted","Data":"c2f0e71bbe81d0bb9699447bc93b4e125dc7524b01723dcd2f11b0cf4158a85a"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.551053 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" event={"ID":"962ff4d2-ff0d-4e75-b04d-0d318c1980de","Type":"ContainerStarted","Data":"7765b2bd37b547a5560c0e8cb39d62019e66aa0dfffe273f599c8f03ff36ef06"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.579632 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" event={"ID":"f0925731-afa5-4a9c-b72d-9806c16dab59","Type":"ContainerStarted","Data":"ae47fadceaa785afd457623f4c764f4cb5b725a5f96f78461aeab070ef444496"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.590470 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" podStartSLOduration=74.590446555 podStartE2EDuration="1m14.590446555s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:23.588866282 +0000 UTC m=+95.946380434" watchObservedRunningTime="2025-11-24 12:29:23.590446555 +0000 UTC m=+95.947960697"
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.591202 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" podStartSLOduration=74.591192571 podStartE2EDuration="1m14.591192571s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:23.471376157 +0000 UTC m=+95.828890309" watchObservedRunningTime="2025-11-24 12:29:23.591192571 +0000 UTC m=+95.948706713"
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.624596 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n"
Nov 24 12:29:23 crc kubenswrapper[4756]: E1124 12:29:23.625601 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:24.125581385 +0000 UTC m=+96.483095527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.655066 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6mk7j" event={"ID":"7a28b73f-bc1c-41f3-a275-37dd4e44b507","Type":"ContainerStarted","Data":"de6b1e87f209c7f78bee2e1b27cf9e67fe2a8777465759b123fa9f7a7ae4fbcd"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.715560 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" event={"ID":"a0672b72-0b66-434e-8930-4297ea0f3f98","Type":"ContainerStarted","Data":"36d5bbcfed1045ea9da06bdaa9684e41eb9619a2a24c9159a7dff63fc70f1035"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.727967 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 12:29:23 crc kubenswrapper[4756]: E1124 12:29:23.729279 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:24.22925736 +0000 UTC m=+96.586771502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.731862 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-srchr" event={"ID":"874bfcf4-b717-4ee9-932f-8b28a2b68eac","Type":"ContainerStarted","Data":"5d9367c2f1db2b56be014af4278c6a2ec92c120af8be205db498f076255f29e9"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.766729 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mfl9q" event={"ID":"3a420a5d-a184-43dd-a25c-80c97502ce62","Type":"ContainerStarted","Data":"624f5fffe3d50f89e04a5cf271e5f335b61ae8174b11e937045a35577198eefb"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.774483 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" podStartSLOduration=74.774458882 podStartE2EDuration="1m14.774458882s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:23.769275393 +0000 UTC m=+96.126789535" watchObservedRunningTime="2025-11-24 12:29:23.774458882 +0000 UTC m=+96.131973034"
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.775125 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-vh4jr" podStartSLOduration=75.775118456 podStartE2EDuration="1m15.775118456s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:23.649965889 +0000 UTC m=+96.007480031" watchObservedRunningTime="2025-11-24 12:29:23.775118456 +0000 UTC m=+96.132632598"
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.830868 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n"
Nov 24 12:29:23 crc kubenswrapper[4756]: E1124 12:29:23.833735 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:24.33370808 +0000 UTC m=+96.691222232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.856491 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" event={"ID":"d7fe174d-06d6-4000-9f8d-3142f627744e","Type":"ContainerStarted","Data":"d129274b6fdffc3425276bf25ff272a7f337a5986747ccc153ee724cb735b9b1"}
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.856566 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9"
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.856883 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-srchr" podStartSLOduration=75.856825047 podStartE2EDuration="1m15.856825047s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:23.83751834 +0000 UTC m=+96.195032502" watchObservedRunningTime="2025-11-24 12:29:23.856825047 +0000 UTC m=+96.214339209"
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.867918 4756 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mlfk9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" start-of-body=
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.868046 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" podUID="d7fe174d-06d6-4000-9f8d-3142f627744e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused"
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.869545 4756 patch_prober.go:28] interesting pod/downloads-7954f5f757-hz6cv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.869657 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hz6cv" podUID="d587d404-97ce-49d5-92f9-360d94d6d061" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused"
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.935116 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 12:29:23 crc kubenswrapper[4756]: E1124 12:29:23.935726 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:24.435700709 +0000 UTC m=+96.793214851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.996198 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-mfl9q" podStartSLOduration=75.996174923 podStartE2EDuration="1m15.996174923s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:23.920640892 +0000 UTC m=+96.278155044" watchObservedRunningTime="2025-11-24 12:29:23.996174923 +0000 UTC m=+96.353689065"
Nov 24 12:29:23 crc kubenswrapper[4756]: I1124 12:29:23.996323 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" podStartSLOduration=74.996317676 podStartE2EDuration="1m14.996317676s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:23.99462377 +0000 UTC m=+96.352137902" watchObservedRunningTime="2025-11-24 12:29:23.996317676 +0000 UTC m=+96.353831818"
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.044931 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n"
Nov 24 12:29:24 crc kubenswrapper[4756]: E1124 12:29:24.048346 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:24.548331302 +0000 UTC m=+96.905845434 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.146000 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 12:29:24 crc kubenswrapper[4756]: E1124 12:29:24.146927 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:24.646870628 +0000 UTC m=+97.004384760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.154032 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n"
Nov 24 12:29:24 crc kubenswrapper[4756]: E1124 12:29:24.154509 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:24.654492389 +0000 UTC m=+97.012006531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.181752 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mfl9q"
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.188620 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 12:29:24 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld
Nov 24 12:29:24 crc kubenswrapper[4756]: [+]process-running ok
Nov 24 12:29:24 crc kubenswrapper[4756]: healthz check failed
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.188697 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.255471 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 12:29:24 crc kubenswrapper[4756]: E1124 12:29:24.255912 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:24.755892595 +0000 UTC m=+97.113406727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.283920 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg"
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.284398 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg"
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.362249 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n"
Nov 24 12:29:24 crc kubenswrapper[4756]: E1124 12:29:24.363003 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:24.862986381 +0000 UTC m=+97.220500523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.462697 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg"
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.463204 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 12:29:24 crc kubenswrapper[4756]: E1124 12:29:24.463369 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:24.963354346 +0000 UTC m=+97.320868488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.463444 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n"
Nov 24 12:29:24 crc kubenswrapper[4756]: E1124 12:29:24.463744 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:24.963736814 +0000 UTC m=+97.321250956 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.489261 4756 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-nmhtt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.489332 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" podUID="35a6cd00-6612-4277-9e8b-ed71bdb5e01d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.568576 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 12:29:24 crc kubenswrapper[4756]: E1124 12:29:24.568610 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:25.068584453 +0000 UTC m=+97.426098595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.568775 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n"
Nov 24 12:29:24 crc kubenswrapper[4756]: E1124 12:29:24.569186 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:25.069171325 +0000 UTC m=+97.426685467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.673348 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 12:29:24 crc kubenswrapper[4756]: E1124 12:29:24.674039 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:25.174011664 +0000 UTC m=+97.531525796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.775280 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:24 crc kubenswrapper[4756]: E1124 12:29:24.775839 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:25.275813989 +0000 UTC m=+97.633328301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.876329 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:24 crc kubenswrapper[4756]: E1124 12:29:24.876610 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:25.376587782 +0000 UTC m=+97.734101914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.877131 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bdnrj" event={"ID":"211bc6d3-e827-4440-ae4a-d418b9bfd3f6","Type":"ContainerStarted","Data":"69a1c15509a6bb485933b870a7ed798475e4979f1d6c834cb9c07d5e2e0a9f2e"} Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.906479 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr" event={"ID":"036136f7-02ff-449a-9367-0cf354821811","Type":"ContainerStarted","Data":"cbed0566e43b91329f5ea5dae931dc6ad1d7daec6e9c5c6fc1d0251cc43ab9b2"} Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.906540 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr" event={"ID":"036136f7-02ff-449a-9367-0cf354821811","Type":"ContainerStarted","Data":"b17eea37517024d3888d59ad154c789f70fcf15a82820143d9bc0ec3d6f42444"} Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.929176 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z" event={"ID":"264f14e7-1512-4403-9dbc-0c6af40ec87b","Type":"ContainerStarted","Data":"f73f59a7242c5a6e7470f77e74b00477c25500f290b488d76d94714325035a89"} Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.943028 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr" podStartSLOduration=76.943005281 podStartE2EDuration="1m16.943005281s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:24.942364658 +0000 UTC m=+97.299878790" watchObservedRunningTime="2025-11-24 12:29:24.943005281 +0000 UTC m=+97.300519423" Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.957078 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq" event={"ID":"c9a2e2ea-e1b4-44b1-8a2d-cb0bb70b5977","Type":"ContainerStarted","Data":"0bf70e37887bd14cdfab2573285bdae41bb18450ce884b1afffc0f75744ec83e"} Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.957997 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq" Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.971503 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" event={"ID":"3ff06ae8-376c-4319-9358-200e6f312237","Type":"ContainerStarted","Data":"9e4ed8f3244e70600d1acdbd5e287e7a52c93cd48e3eb0fe869674be3bf4d511"} Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.973364 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.978326 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:24 crc kubenswrapper[4756]: E1124 12:29:24.978825 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:25.478806655 +0000 UTC m=+97.836320797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.979125 4756 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xbhls container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Nov 24 12:29:24 crc kubenswrapper[4756]: I1124 12:29:24.979184 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" podUID="3ff06ae8-376c-4319-9358-200e6f312237" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.020803 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xhcw8" event={"ID":"a0672b72-0b66-434e-8930-4297ea0f3f98","Type":"ContainerStarted","Data":"e522373088b40cd796668d44275deccbde93195ffc1b1255668da7c7572fe07c"} Nov 24 12:29:25 crc 
kubenswrapper[4756]: I1124 12:29:25.066486 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lnh7z" podStartSLOduration=77.066467992 podStartE2EDuration="1m17.066467992s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:25.056273728 +0000 UTC m=+97.413787870" watchObservedRunningTime="2025-11-24 12:29:25.066467992 +0000 UTC m=+97.423982134" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.079812 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:25 crc kubenswrapper[4756]: E1124 12:29:25.081526 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:25.581500269 +0000 UTC m=+97.939014411 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.090575 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9" event={"ID":"9feafd03-8082-4d57-9611-602776ad0db6","Type":"ContainerStarted","Data":"166c3f8115d1ebfac5be212529ce4dc2e273683de623092ab6506a837aeed61f"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.112281 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ssq2z" event={"ID":"87c12acc-f25d-4842-a473-832d6b998769","Type":"ContainerStarted","Data":"f6f9080792652cd6eba734403c539a3084e5161186df7c180b2a62c4d289f1b5"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.112366 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ssq2z" event={"ID":"87c12acc-f25d-4842-a473-832d6b998769","Type":"ContainerStarted","Data":"396a6576f15a9b8a1201deccd7c78fa267f030a1e415b4b5ebc280599bac1634"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.129402 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff" event={"ID":"00155282-9211-4df2-b258-105f5a0c8236","Type":"ContainerStarted","Data":"9119dd92b8075c4d2230befc8f0cefaf2fde2c6f96e5d87c3840700a83167c34"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.148016 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" podStartSLOduration=76.14799013 podStartE2EDuration="1m16.14799013s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:25.139048531 +0000 UTC m=+97.496562673" watchObservedRunningTime="2025-11-24 12:29:25.14799013 +0000 UTC m=+97.505504272" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.155619 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x" event={"ID":"86c61608-af41-4137-9fa6-d07c1e89f13d","Type":"ContainerStarted","Data":"00db7db01f670ffd08647885b11b1b5f7a19157cacff6ecea13b200ac4075935"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.156986 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.182083 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.183428 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x" event={"ID":"21b90170-abc5-427d-92a4-d7f4240bb961","Type":"ContainerStarted","Data":"f9b62b542f7ee488fb1132f30e7b6ea0db804cf4f59e0d25a1f79bc7ac209ed7"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.183479 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x" 
event={"ID":"21b90170-abc5-427d-92a4-d7f4240bb961","Type":"ContainerStarted","Data":"4ea94c20f732a290f478d6215df586bd400cd685983fcee18252c86bd37d8c58"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.183992 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x" Nov 24 12:29:25 crc kubenswrapper[4756]: E1124 12:29:25.184078 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:25.68405947 +0000 UTC m=+98.041573812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.196513 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:25 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:25 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:25 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.196586 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.196681 4756 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-6ks7x container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.196695 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x" podUID="21b90170-abc5-427d-92a4-d7f4240bb961" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.208914 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.215857 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bmgrw" event={"ID":"1f52de20-d009-4ede-b66a-ba6f395cd99b","Type":"ContainerStarted","Data":"e7a5766f6cac6826ed03caa4dd83f4256db630b2e735678ab16179dda31e61ea"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.216062 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c698z" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.232717 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5" event={"ID":"6139f45c-2a2e-4561-af2e-2451b1f7bc15","Type":"ContainerStarted","Data":"ded4d22c11adf7754eb0d2bea53d2de40629fb769ec6657079d990f67eda8d2e"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.232796 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5" event={"ID":"6139f45c-2a2e-4561-af2e-2451b1f7bc15","Type":"ContainerStarted","Data":"38511efe48ae63876ac2412c14f6c2cf480c37e8be856146e9d21595d8ad6522"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.242124 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jjsts" event={"ID":"bcb119b7-a5d6-411a-8b0e-78d09d23c6b0","Type":"ContainerStarted","Data":"28583c771c56cfbd90621655d232a9210813fc3ef25ad1c72fad79d19cdb3f2c"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.242217 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jjsts" event={"ID":"bcb119b7-a5d6-411a-8b0e-78d09d23c6b0","Type":"ContainerStarted","Data":"5b07b0d903ba4d7a7080a7b44b0bff61f75f8271a99c65c2a57bcc928de6db1a"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.256295 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dlzvw" event={"ID":"8dbcb8da-17af-41eb-bfc0-c26853a43c34","Type":"ContainerStarted","Data":"fd32fdb44dbda8841ce8241d0c55c315a581e2127474bb7fe09a18457eba32ac"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.259781 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297" event={"ID":"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3","Type":"ContainerStarted","Data":"2947cc4919ff6ef6aed3d84697c14e0eefb7b69187d176c90d42e91a37ef6964"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.260328 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297" event={"ID":"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3","Type":"ContainerStarted","Data":"80613993bc5858edf9d3ecfdf657c575b27ae840e58fcbfe95aaa95b025212b8"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.260350 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297" event={"ID":"ce0dd05e-d269-4a9d-9d4b-3fb1349e79a3","Type":"ContainerStarted","Data":"b2d30976ee724cd962e027523b11fcded5d93b72715e04cf043775991330c180"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.269978 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq" podStartSLOduration=76.269954489 podStartE2EDuration="1m16.269954489s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:25.261024951 +0000 UTC m=+97.618539093" watchObservedRunningTime="2025-11-24 12:29:25.269954489 +0000 UTC m=+97.627468631" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.278623 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr" event={"ID":"37bf3224-33a8-45ab-93fc-05a44ed3f535","Type":"ContainerStarted","Data":"3018e02a4c446124925a5a1c13e37fb2c3849dffc4e7d4b053504d57aa7dcba4"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.278691 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr" event={"ID":"37bf3224-33a8-45ab-93fc-05a44ed3f535","Type":"ContainerStarted","Data":"17c2d02fe0bcafca29cfcb1f538413dd40574b13cadb28efde507ced3f632638"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.294140 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:25 crc 
kubenswrapper[4756]: E1124 12:29:25.294398 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:25.794347193 +0000 UTC m=+98.151861335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.294863 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:25 crc kubenswrapper[4756]: E1124 12:29:25.298477 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:25.79845625 +0000 UTC m=+98.155970392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.328641 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h8l7m" event={"ID":"4cf127cc-52b2-4121-8f1d-b2a628f95ea4","Type":"ContainerStarted","Data":"5667452f08c5e1f35d1f15f7bca82bf13a70f9f90b79d627f4a065768c6f4747"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.328699 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h8l7m" event={"ID":"4cf127cc-52b2-4121-8f1d-b2a628f95ea4","Type":"ContainerStarted","Data":"b81d2b25cce089d5b289638e23f073402d0fb44009cbbb215ff7e28ce8718b7f"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.329059 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-h8l7m" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.358396 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" event={"ID":"8c6400f9-d8a4-48da-986a-b9dd8bc96a82","Type":"ContainerStarted","Data":"fa5bd33e5085405f4c6ae9e19ab1d58d9d6444e25e33417ad01fc6b0a85bdb47"} Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.359519 4756 patch_prober.go:28] interesting pod/downloads-7954f5f757-hz6cv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.359572 4756 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hz6cv" podUID="d587d404-97ce-49d5-92f9-360d94d6d061" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.370335 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.373343 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9kgtg" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.380355 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mlfk9" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.399637 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:25 crc kubenswrapper[4756]: E1124 12:29:25.400209 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:25.900185493 +0000 UTC m=+98.257699635 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.410718 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:25 crc kubenswrapper[4756]: E1124 12:29:25.421742 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:25.921724877 +0000 UTC m=+98.279239019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.439046 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-ssq2z" podStartSLOduration=77.439023391 podStartE2EDuration="1m17.439023391s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:25.395106376 +0000 UTC m=+97.752620518" watchObservedRunningTime="2025-11-24 12:29:25.439023391 +0000 UTC m=+97.796537533" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.478420 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6ql8x" podStartSLOduration=76.478402311 podStartE2EDuration="1m16.478402311s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:25.44183691 +0000 UTC m=+97.799351052" watchObservedRunningTime="2025-11-24 12:29:25.478402311 +0000 UTC m=+97.835916453" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.514573 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:25 crc kubenswrapper[4756]: E1124 12:29:25.515903 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:26.01588142 +0000 UTC m=+98.373395562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.524089 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vb2s9" podStartSLOduration=77.524064763 podStartE2EDuration="1m17.524064763s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:25.479492914 +0000 UTC m=+97.837007046" watchObservedRunningTime="2025-11-24 12:29:25.524064763 +0000 UTC m=+97.881578905" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.575043 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ch9ff" podStartSLOduration=76.575021706 podStartE2EDuration="1m16.575021706s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 
12:29:25.526174697 +0000 UTC m=+97.883688839" watchObservedRunningTime="2025-11-24 12:29:25.575021706 +0000 UTC m=+97.932535838" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.609654 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x" podStartSLOduration=76.609626965 podStartE2EDuration="1m16.609626965s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:25.576219941 +0000 UTC m=+97.933734113" watchObservedRunningTime="2025-11-24 12:29:25.609626965 +0000 UTC m=+97.967141107" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.617432 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:25 crc kubenswrapper[4756]: E1124 12:29:25.617910 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:26.11789603 +0000 UTC m=+98.475410172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.713657 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xz297" podStartSLOduration=77.713635797 podStartE2EDuration="1m17.713635797s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:25.660493437 +0000 UTC m=+98.018007579" watchObservedRunningTime="2025-11-24 12:29:25.713635797 +0000 UTC m=+98.071149939" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.715471 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-jjsts" podStartSLOduration=76.715461915 podStartE2EDuration="1m16.715461915s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:25.706633189 +0000 UTC m=+98.064147331" watchObservedRunningTime="2025-11-24 12:29:25.715461915 +0000 UTC m=+98.072976057" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.718858 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:25 crc kubenswrapper[4756]: E1124 12:29:25.719401 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:26.219378368 +0000 UTC m=+98.576892510 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.739926 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r5pv5" podStartSLOduration=76.73989933 podStartE2EDuration="1m16.73989933s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:25.739359909 +0000 UTC m=+98.096874071" watchObservedRunningTime="2025-11-24 12:29:25.73989933 +0000 UTC m=+98.097413472" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.798122 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-h8l7m" podStartSLOduration=8.798101686 podStartE2EDuration="8.798101686s" podCreationTimestamp="2025-11-24 12:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:25.795353008 +0000 UTC m=+98.152867160" watchObservedRunningTime="2025-11-24 
12:29:25.798101686 +0000 UTC m=+98.155615828" Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.820900 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:25 crc kubenswrapper[4756]: E1124 12:29:25.821395 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:26.321374516 +0000 UTC m=+98.678888658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.922109 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:25 crc kubenswrapper[4756]: E1124 12:29:25.922436 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 12:29:26.422384214 +0000 UTC m=+98.779898356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:25 crc kubenswrapper[4756]: I1124 12:29:25.922774 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:25 crc kubenswrapper[4756]: E1124 12:29:25.923187 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:26.423169291 +0000 UTC m=+98.780683443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.024094 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.024349 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:26.524312822 +0000 UTC m=+98.881826964 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.024514 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.024912 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:26.524902054 +0000 UTC m=+98.882416196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.032716 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-v5ndr" podStartSLOduration=77.032695229 podStartE2EDuration="1m17.032695229s" podCreationTimestamp="2025-11-24 12:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:26.002487022 +0000 UTC m=+98.360001164" watchObservedRunningTime="2025-11-24 12:29:26.032695229 +0000 UTC m=+98.390209371" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.125912 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.126181 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:26.626121477 +0000 UTC m=+98.983635629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.126358 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.126842 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:26.626833152 +0000 UTC m=+98.984347294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.175032 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:26 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:26 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:26 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.175134 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.228294 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.228518 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 12:29:26.728485494 +0000 UTC m=+99.085999636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.228587 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.228965 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:26.728951333 +0000 UTC m=+99.086465475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.330715 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.330986 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:26.830926862 +0000 UTC m=+99.188441004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.331388 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.331773 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:26.831756089 +0000 UTC m=+99.189270231 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.420591 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" event={"ID":"5a60e484-5344-420c-8f60-aea62504ed10","Type":"ContainerStarted","Data":"4450585bf7327a051699642384b0a495e29cd18423ed36656d76b0341402bbde"} Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.434182 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.434573 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:26.934553745 +0000 UTC m=+99.292067887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.437898 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jjsts" event={"ID":"bcb119b7-a5d6-411a-8b0e-78d09d23c6b0","Type":"ContainerStarted","Data":"bb01d47e8d994d7477750297d965640915bafe669f13e126bd15c2fbd26d2dd1"} Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.466510 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h8l7m" event={"ID":"4cf127cc-52b2-4121-8f1d-b2a628f95ea4","Type":"ContainerStarted","Data":"0f9fbeab94367635f94bc3a071ced3f7e4ec4004ab9b684b92415595b0136242"} Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.518193 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" event={"ID":"8c6400f9-d8a4-48da-986a-b9dd8bc96a82","Type":"ContainerStarted","Data":"c3b17940ce0e7983d24b86d336db076ae23fdee1cae33994cba6b71a36e0b52d"} Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.520435 4756 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xbhls container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.520487 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" 
podUID="3ff06ae8-376c-4319-9358-200e6f312237" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.537228 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.544803 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:27.044782917 +0000 UTC m=+99.402297049 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.637832 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.639497 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:27.139475962 +0000 UTC m=+99.496990104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.649413 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" podStartSLOduration=78.649381571 podStartE2EDuration="1m18.649381571s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:26.628944941 +0000 UTC m=+98.986459083" watchObservedRunningTime="2025-11-24 12:29:26.649381571 +0000 UTC m=+99.006895723" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.650832 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bdhwb"] Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.676460 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bdhwb"] Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.676735 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.679190 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.686352 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6ks7x" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.741708 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3307bf9-529c-4d16-8a9c-87b46c381ca6-utilities\") pod \"community-operators-bdhwb\" (UID: \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\") " pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.741778 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3307bf9-529c-4d16-8a9c-87b46c381ca6-catalog-content\") pod \"community-operators-bdhwb\" (UID: \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\") " pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.741834 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.741856 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn8pg\" (UniqueName: 
\"kubernetes.io/projected/a3307bf9-529c-4d16-8a9c-87b46c381ca6-kube-api-access-tn8pg\") pod \"community-operators-bdhwb\" (UID: \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\") " pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.742259 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:27.242243628 +0000 UTC m=+99.599757770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.821028 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nb9nl"] Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.822110 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.830726 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.842924 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.843266 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3307bf9-529c-4d16-8a9c-87b46c381ca6-utilities\") pod \"community-operators-bdhwb\" (UID: \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\") " pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.843311 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gclf\" (UniqueName: \"kubernetes.io/projected/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-kube-api-access-4gclf\") pod \"certified-operators-nb9nl\" (UID: \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\") " pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.843382 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3307bf9-529c-4d16-8a9c-87b46c381ca6-catalog-content\") pod \"community-operators-bdhwb\" (UID: \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\") " pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.843413 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-catalog-content\") pod \"certified-operators-nb9nl\" (UID: \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\") " pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.843478 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn8pg\" (UniqueName: \"kubernetes.io/projected/a3307bf9-529c-4d16-8a9c-87b46c381ca6-kube-api-access-tn8pg\") pod \"community-operators-bdhwb\" (UID: \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\") " pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.843500 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-utilities\") pod \"certified-operators-nb9nl\" (UID: \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\") " pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.843668 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:27.343651534 +0000 UTC m=+99.701165676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.844251 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3307bf9-529c-4d16-8a9c-87b46c381ca6-utilities\") pod \"community-operators-bdhwb\" (UID: \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\") " pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.844552 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3307bf9-529c-4d16-8a9c-87b46c381ca6-catalog-content\") pod \"community-operators-bdhwb\" (UID: \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\") " pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.848962 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nb9nl"] Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.912512 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn8pg\" (UniqueName: \"kubernetes.io/projected/a3307bf9-529c-4d16-8a9c-87b46c381ca6-kube-api-access-tn8pg\") pod \"community-operators-bdhwb\" (UID: \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\") " pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.945745 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-catalog-content\") pod \"certified-operators-nb9nl\" (UID: \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\") " pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.945848 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.945908 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-utilities\") pod \"certified-operators-nb9nl\" (UID: \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\") " pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.946002 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gclf\" (UniqueName: \"kubernetes.io/projected/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-kube-api-access-4gclf\") pod \"certified-operators-nb9nl\" (UID: \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\") " pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:29:26 crc kubenswrapper[4756]: E1124 12:29:26.946515 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:27.44649149 +0000 UTC m=+99.804005632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.946619 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-catalog-content\") pod \"certified-operators-nb9nl\" (UID: \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\") " pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.947791 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-utilities\") pod \"certified-operators-nb9nl\" (UID: \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\") " pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:29:26 crc kubenswrapper[4756]: I1124 12:29:26.973473 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gclf\" (UniqueName: \"kubernetes.io/projected/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-kube-api-access-4gclf\") pod \"certified-operators-nb9nl\" (UID: \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\") " pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.005224 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.015219 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5cxgm"] Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.016897 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.045402 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5cxgm"] Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.055437 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.055809 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:27.555789002 +0000 UTC m=+99.913303144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.061245 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca136cca-642a-419a-be24-241bbb527020-catalog-content\") pod \"community-operators-5cxgm\" (UID: \"ca136cca-642a-419a-be24-241bbb527020\") " pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.061501 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca136cca-642a-419a-be24-241bbb527020-utilities\") pod \"community-operators-5cxgm\" (UID: \"ca136cca-642a-419a-be24-241bbb527020\") " pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.061558 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.061706 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cswgx\" (UniqueName: 
\"kubernetes.io/projected/ca136cca-642a-419a-be24-241bbb527020-kube-api-access-cswgx\") pod \"community-operators-5cxgm\" (UID: \"ca136cca-642a-419a-be24-241bbb527020\") " pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.062384 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:27.562359811 +0000 UTC m=+99.919873943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.149503 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.168921 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.169257 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cswgx\" (UniqueName: \"kubernetes.io/projected/ca136cca-642a-419a-be24-241bbb527020-kube-api-access-cswgx\") pod \"community-operators-5cxgm\" (UID: \"ca136cca-642a-419a-be24-241bbb527020\") " pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.169312 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca136cca-642a-419a-be24-241bbb527020-catalog-content\") pod \"community-operators-5cxgm\" (UID: \"ca136cca-642a-419a-be24-241bbb527020\") " pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.169367 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca136cca-642a-419a-be24-241bbb527020-utilities\") pod \"community-operators-5cxgm\" (UID: \"ca136cca-642a-419a-be24-241bbb527020\") " pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.170464 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca136cca-642a-419a-be24-241bbb527020-utilities\") pod \"community-operators-5cxgm\" (UID: \"ca136cca-642a-419a-be24-241bbb527020\") " 
pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.170545 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:27.67052393 +0000 UTC m=+100.028038072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.171080 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca136cca-642a-419a-be24-241bbb527020-catalog-content\") pod \"community-operators-5cxgm\" (UID: \"ca136cca-642a-419a-be24-241bbb527020\") " pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.205824 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cswgx\" (UniqueName: \"kubernetes.io/projected/ca136cca-642a-419a-be24-241bbb527020-kube-api-access-cswgx\") pod \"community-operators-5cxgm\" (UID: \"ca136cca-642a-419a-be24-241bbb527020\") " pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.206369 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Nov 24 12:29:27 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:27 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:27 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.208198 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.221877 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6smp9"] Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.223136 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.235988 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6smp9"] Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.271110 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-catalog-content\") pod \"certified-operators-6smp9\" (UID: \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\") " pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.271210 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n8b2\" (UniqueName: \"kubernetes.io/projected/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-kube-api-access-8n8b2\") pod \"certified-operators-6smp9\" (UID: \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\") " pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.271350 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.271404 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-utilities\") pod \"certified-operators-6smp9\" (UID: \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\") " pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.271891 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:27.771875365 +0000 UTC m=+100.129389507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.358541 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.372404 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.372646 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-utilities\") pod \"certified-operators-6smp9\" (UID: \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\") " pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.372730 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-catalog-content\") pod \"certified-operators-6smp9\" (UID: \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\") " pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.372772 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n8b2\" (UniqueName: \"kubernetes.io/projected/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-kube-api-access-8n8b2\") pod \"certified-operators-6smp9\" (UID: \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\") " pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.373820 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-utilities\") pod \"certified-operators-6smp9\" (UID: \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\") " 
pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.373847 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-catalog-content\") pod \"certified-operators-6smp9\" (UID: \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\") " pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.373961 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:27.873934725 +0000 UTC m=+100.231449047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.396651 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n8b2\" (UniqueName: \"kubernetes.io/projected/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-kube-api-access-8n8b2\") pod \"certified-operators-6smp9\" (UID: \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\") " pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.405242 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bdhwb"] Nov 24 12:29:27 crc kubenswrapper[4756]: W1124 12:29:27.415011 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3307bf9_529c_4d16_8a9c_87b46c381ca6.slice/crio-43f9f97d2c9172e8b325b20185c63bd3be7ec7ba219b5f6669476e69ff91e97c WatchSource:0}: Error finding container 43f9f97d2c9172e8b325b20185c63bd3be7ec7ba219b5f6669476e69ff91e97c: Status 404 returned error can't find the container with id 43f9f97d2c9172e8b325b20185c63bd3be7ec7ba219b5f6669476e69ff91e97c Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.476480 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.477092 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:27.977067558 +0000 UTC m=+100.334581700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.564993 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdhwb" event={"ID":"a3307bf9-529c-4d16-8a9c-87b46c381ca6","Type":"ContainerStarted","Data":"43f9f97d2c9172e8b325b20185c63bd3be7ec7ba219b5f6669476e69ff91e97c"} Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.570051 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" event={"ID":"5a60e484-5344-420c-8f60-aea62504ed10","Type":"ContainerStarted","Data":"f97d9f9e185718baeeb8957e60a62f17ea2c42f1ed2228179a79ddc511365024"} Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.570338 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" event={"ID":"5a60e484-5344-420c-8f60-aea62504ed10","Type":"ContainerStarted","Data":"f19488a6829750fb5cacc39533ffb7de4fecfb1fbfb82f01dd8f0bb5e089fd4c"} Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.570416 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" event={"ID":"5a60e484-5344-420c-8f60-aea62504ed10","Type":"ContainerStarted","Data":"7be49b90d4d9741d02cca00862bdb4ca5c971a41ffb383f421750bdd3b2683a6"} Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.575790 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 
12:29:27.577670 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.577994 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.077966614 +0000 UTC m=+100.435480756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.578420 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.578704 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs\") pod \"network-metrics-daemon-r955c\" (UID: \"6662f3ec-8806-4797-a7a5-f1606c4a54cf\") " 
pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.585265 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.585305 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.085278468 +0000 UTC m=+100.442792610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.594237 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8xkjk" podStartSLOduration=10.594207676 podStartE2EDuration="10.594207676s" podCreationTimestamp="2025-11-24 12:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:27.592400998 +0000 UTC m=+99.949915140" watchObservedRunningTime="2025-11-24 12:29:27.594207676 +0000 UTC m=+99.951721828" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.625554 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6662f3ec-8806-4797-a7a5-f1606c4a54cf-metrics-certs\") pod \"network-metrics-daemon-r955c\" (UID: \"6662f3ec-8806-4797-a7a5-f1606c4a54cf\") " 
pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.682776 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.683059 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.183026667 +0000 UTC m=+100.540540809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.683787 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.690825 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 12:29:28.190802211 +0000 UTC m=+100.548316533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.739673 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nb9nl"] Nov 24 12:29:27 crc kubenswrapper[4756]: W1124 12:29:27.755014 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ed239e1_dc53_4ea6_89a4_4cff0bf2b0c4.slice/crio-be12c491411871c440f977ad17982ef750568f47e33f93d162521ccbe65c27ba WatchSource:0}: Error finding container be12c491411871c440f977ad17982ef750568f47e33f93d162521ccbe65c27ba: Status 404 returned error can't find the container with id be12c491411871c440f977ad17982ef750568f47e33f93d162521ccbe65c27ba Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.767809 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5cxgm"] Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.784869 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.785008 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.284984135 +0000 UTC m=+100.642498287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.785241 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.785881 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.285869004 +0000 UTC m=+100.643383146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.805615 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r955c" Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.886470 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.886996 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.386962554 +0000 UTC m=+100.744476736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.960597 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6smp9"] Nov 24 12:29:27 crc kubenswrapper[4756]: I1124 12:29:27.987963 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:27 crc kubenswrapper[4756]: E1124 12:29:27.988730 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.488710157 +0000 UTC m=+100.846224299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.089939 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:28 crc kubenswrapper[4756]: E1124 12:29:28.090056 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.590032072 +0000 UTC m=+100.947546224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.090321 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:28 crc kubenswrapper[4756]: E1124 12:29:28.090730 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.590720536 +0000 UTC m=+100.948234678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:28 crc kubenswrapper[4756]: W1124 12:29:28.098491 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc70eb17_e5c5_4cfb_8e13_fae2fe6c2f76.slice/crio-c3f57c016d3fc9a2bdb3232695105314b2e6df0af836ee462e0b8bd461e12dee WatchSource:0}: Error finding container c3f57c016d3fc9a2bdb3232695105314b2e6df0af836ee462e0b8bd461e12dee: Status 404 returned error can't find the container with id c3f57c016d3fc9a2bdb3232695105314b2e6df0af836ee462e0b8bd461e12dee Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.099731 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-r955c"] Nov 24 12:29:28 crc kubenswrapper[4756]: W1124 12:29:28.106915 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6662f3ec_8806_4797_a7a5_f1606c4a54cf.slice/crio-c767b1babec9484c70765696a76abec27a69e923bd529cec1f610479caecdab0 WatchSource:0}: Error finding container c767b1babec9484c70765696a76abec27a69e923bd529cec1f610479caecdab0: Status 404 returned error can't find the container with id c767b1babec9484c70765696a76abec27a69e923bd529cec1f610479caecdab0 Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.173341 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Nov 24 12:29:28 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:28 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:28 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.173405 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.191856 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:28 crc kubenswrapper[4756]: E1124 12:29:28.192234 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.692180614 +0000 UTC m=+101.049694756 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.192431 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:28 crc kubenswrapper[4756]: E1124 12:29:28.192818 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.692802087 +0000 UTC m=+101.050316229 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.236450 4756 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.294032 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:28 crc kubenswrapper[4756]: E1124 12:29:28.294256 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.794225164 +0000 UTC m=+101.151739306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.294487 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:28 crc kubenswrapper[4756]: E1124 12:29:28.294935 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.794924939 +0000 UTC m=+101.152439081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.395074 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:28 crc kubenswrapper[4756]: E1124 12:29:28.395476 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.895398895 +0000 UTC m=+101.252913057 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.395604 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:28 crc kubenswrapper[4756]: E1124 12:29:28.395984 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.895969478 +0000 UTC m=+101.253483620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.497704 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:28 crc kubenswrapper[4756]: E1124 12:29:28.498095 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.998060618 +0000 UTC m=+101.355574820 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.498404 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:28 crc kubenswrapper[4756]: E1124 12:29:28.499018 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:28.999007598 +0000 UTC m=+101.356521740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.590583 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r955c" event={"ID":"6662f3ec-8806-4797-a7a5-f1606c4a54cf","Type":"ContainerStarted","Data":"8d1863cf624fd485cc4b3d56eed05a41b32fd4e781d925eaad88bc64bd65b3dd"} Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.590642 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r955c" event={"ID":"6662f3ec-8806-4797-a7a5-f1606c4a54cf","Type":"ContainerStarted","Data":"c767b1babec9484c70765696a76abec27a69e923bd529cec1f610479caecdab0"} Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.599959 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:28 crc kubenswrapper[4756]: E1124 12:29:28.600180 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 12:29:29.100130799 +0000 UTC m=+101.457644941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.600330 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:28 crc kubenswrapper[4756]: E1124 12:29:28.600940 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 12:29:29.100918875 +0000 UTC m=+101.458433057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p5f6n" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.603147 4756 generic.go:334] "Generic (PLEG): container finished" podID="ca136cca-642a-419a-be24-241bbb527020" containerID="23cfa332aa950d34b91351909b92e53aed04f123b461f6114756a0f0eeb47c86" exitCode=0 Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.603254 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cxgm" event={"ID":"ca136cca-642a-419a-be24-241bbb527020","Type":"ContainerDied","Data":"23cfa332aa950d34b91351909b92e53aed04f123b461f6114756a0f0eeb47c86"} Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.603286 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cxgm" event={"ID":"ca136cca-642a-419a-be24-241bbb527020","Type":"ContainerStarted","Data":"e132d63aee52014cb5b3e3234ed69c555f0e43c1ba0757c4cef808145e210228"} Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.603836 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8d29p"] Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.605057 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.608576 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.609070 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.620499 4756 generic.go:334] "Generic (PLEG): container finished" podID="dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" containerID="fc55a1cb9a475d9987606d5f5c3aca951d34b1dcfcddc46474d8c28e577ddc44" exitCode=0 Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.621218 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6smp9" event={"ID":"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76","Type":"ContainerDied","Data":"fc55a1cb9a475d9987606d5f5c3aca951d34b1dcfcddc46474d8c28e577ddc44"} Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.621267 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6smp9" event={"ID":"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76","Type":"ContainerStarted","Data":"c3f57c016d3fc9a2bdb3232695105314b2e6df0af836ee462e0b8bd461e12dee"} Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.629763 4756 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T12:29:28.236519248Z","Handler":null,"Name":""} Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.633836 4756 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.633889 4756 csi_plugin.go:113] kubernetes.io/csi: 
Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.634553 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8d29p"] Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.636520 4756 generic.go:334] "Generic (PLEG): container finished" podID="1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" containerID="b8a6138be86e1224816bddeda374445ceba0cd39d6ecf44fd1f19af373850175" exitCode=0 Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.636610 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb9nl" event={"ID":"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4","Type":"ContainerDied","Data":"b8a6138be86e1224816bddeda374445ceba0cd39d6ecf44fd1f19af373850175"} Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.636644 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb9nl" event={"ID":"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4","Type":"ContainerStarted","Data":"be12c491411871c440f977ad17982ef750568f47e33f93d162521ccbe65c27ba"} Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.652964 4756 generic.go:334] "Generic (PLEG): container finished" podID="a3307bf9-529c-4d16-8a9c-87b46c381ca6" containerID="e49fb446c913e437497eac1e954325dc5dd9b0a17dbee77e24ceb3bba4dd2a52" exitCode=0 Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.653063 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdhwb" event={"ID":"a3307bf9-529c-4d16-8a9c-87b46c381ca6","Type":"ContainerDied","Data":"e49fb446c913e437497eac1e954325dc5dd9b0a17dbee77e24ceb3bba4dd2a52"} Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.707625 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.707946 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef577d14-3e75-4898-a0d8-cf7f03912760-utilities\") pod \"redhat-marketplace-8d29p\" (UID: \"ef577d14-3e75-4898-a0d8-cf7f03912760\") " pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.708003 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef577d14-3e75-4898-a0d8-cf7f03912760-catalog-content\") pod \"redhat-marketplace-8d29p\" (UID: \"ef577d14-3e75-4898-a0d8-cf7f03912760\") " pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.708226 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkn8b\" (UniqueName: \"kubernetes.io/projected/ef577d14-3e75-4898-a0d8-cf7f03912760-kube-api-access-bkn8b\") pod \"redhat-marketplace-8d29p\" (UID: \"ef577d14-3e75-4898-a0d8-cf7f03912760\") " pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.721193 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.809577 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef577d14-3e75-4898-a0d8-cf7f03912760-utilities\") pod \"redhat-marketplace-8d29p\" (UID: \"ef577d14-3e75-4898-a0d8-cf7f03912760\") " pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.809650 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef577d14-3e75-4898-a0d8-cf7f03912760-catalog-content\") pod \"redhat-marketplace-8d29p\" (UID: \"ef577d14-3e75-4898-a0d8-cf7f03912760\") " pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.809769 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.809813 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkn8b\" (UniqueName: \"kubernetes.io/projected/ef577d14-3e75-4898-a0d8-cf7f03912760-kube-api-access-bkn8b\") pod \"redhat-marketplace-8d29p\" (UID: \"ef577d14-3e75-4898-a0d8-cf7f03912760\") " pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.810295 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef577d14-3e75-4898-a0d8-cf7f03912760-utilities\") pod \"redhat-marketplace-8d29p\" (UID: 
\"ef577d14-3e75-4898-a0d8-cf7f03912760\") " pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.810371 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef577d14-3e75-4898-a0d8-cf7f03912760-catalog-content\") pod \"redhat-marketplace-8d29p\" (UID: \"ef577d14-3e75-4898-a0d8-cf7f03912760\") " pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.834117 4756 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.834206 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.853918 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkn8b\" (UniqueName: \"kubernetes.io/projected/ef577d14-3e75-4898-a0d8-cf7f03912760-kube-api-access-bkn8b\") pod \"redhat-marketplace-8d29p\" (UID: \"ef577d14-3e75-4898-a0d8-cf7f03912760\") " pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.885245 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-p5f6n\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:28 crc kubenswrapper[4756]: I1124 12:29:28.930415 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.008075 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7jfcl"] Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.010413 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.042127 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jfcl"] Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.089138 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.115867 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-utilities\") pod \"redhat-marketplace-7jfcl\" (UID: \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\") " pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.115957 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4wv2\" (UniqueName: \"kubernetes.io/projected/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-kube-api-access-m4wv2\") pod \"redhat-marketplace-7jfcl\" (UID: \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\") " pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.116244 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-catalog-content\") pod \"redhat-marketplace-7jfcl\" (UID: \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\") " pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.218150 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-catalog-content\") pod \"redhat-marketplace-7jfcl\" (UID: \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\") " pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.218262 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-utilities\") pod \"redhat-marketplace-7jfcl\" (UID: \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\") " pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.218332 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4wv2\" (UniqueName: \"kubernetes.io/projected/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-kube-api-access-m4wv2\") pod \"redhat-marketplace-7jfcl\" (UID: \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\") " pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.219410 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-catalog-content\") pod \"redhat-marketplace-7jfcl\" (UID: \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\") " pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.220008 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-utilities\") pod \"redhat-marketplace-7jfcl\" (UID: \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\") " pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.227046 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:29 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:29 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:29 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.227128 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.256323 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4wv2\" (UniqueName: \"kubernetes.io/projected/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-kube-api-access-m4wv2\") pod \"redhat-marketplace-7jfcl\" (UID: \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\") " pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.373903 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.413008 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.413821 4756 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.414240 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.450274 4756 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tsnkp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 24 12:29:29 crc kubenswrapper[4756]: [+]log ok Nov 24 12:29:29 crc kubenswrapper[4756]: [+]etcd ok Nov 24 12:29:29 crc kubenswrapper[4756]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 24 12:29:29 crc kubenswrapper[4756]: [+]poststarthook/generic-apiserver-start-informers ok Nov 24 12:29:29 crc kubenswrapper[4756]: [+]poststarthook/max-in-flight-filter ok Nov 24 12:29:29 crc kubenswrapper[4756]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 24 12:29:29 crc kubenswrapper[4756]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 24 12:29:29 crc kubenswrapper[4756]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 24 12:29:29 crc kubenswrapper[4756]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Nov 24 12:29:29 crc kubenswrapper[4756]: [+]poststarthook/project.openshift.io-projectcache ok Nov 24 12:29:29 crc kubenswrapper[4756]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 24 12:29:29 crc kubenswrapper[4756]: [+]poststarthook/openshift.io-startinformers ok Nov 24 12:29:29 crc kubenswrapper[4756]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 24 12:29:29 crc kubenswrapper[4756]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 24 12:29:29 crc kubenswrapper[4756]: livez check failed Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.450744 4756 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" podUID="8c6400f9-d8a4-48da-986a-b9dd8bc96a82" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.452548 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8d29p"] Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.462326 4756 patch_prober.go:28] interesting pod/downloads-7954f5f757-hz6cv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.462386 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hz6cv" podUID="d587d404-97ce-49d5-92f9-360d94d6d061" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.463912 4756 patch_prober.go:28] interesting pod/downloads-7954f5f757-hz6cv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.463948 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hz6cv" podUID="d587d404-97ce-49d5-92f9-360d94d6d061" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.694741 4756 generic.go:334] "Generic (PLEG): container finished" podID="036136f7-02ff-449a-9367-0cf354821811" 
containerID="cbed0566e43b91329f5ea5dae931dc6ad1d7daec6e9c5c6fc1d0251cc43ab9b2" exitCode=0 Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.695781 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr" event={"ID":"036136f7-02ff-449a-9367-0cf354821811","Type":"ContainerDied","Data":"cbed0566e43b91329f5ea5dae931dc6ad1d7daec6e9c5c6fc1d0251cc43ab9b2"} Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.701553 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r955c" event={"ID":"6662f3ec-8806-4797-a7a5-f1606c4a54cf","Type":"ContainerStarted","Data":"bd5e9ab8d7dcd7e7c54ddd941e657e77d21d06ce1cb4520c69adb1b42569de56"} Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.707560 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8d29p" event={"ID":"ef577d14-3e75-4898-a0d8-cf7f03912760","Type":"ContainerStarted","Data":"4f7907c151c8e22eecb8a99c9651f992127903b46ad5c79496f87e86889d2244"} Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.740077 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jfcl"] Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.780725 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p5f6n"] Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.783731 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-r955c" podStartSLOduration=81.783696345 podStartE2EDuration="1m21.783696345s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:29.764067291 +0000 UTC m=+102.121581433" watchObservedRunningTime="2025-11-24 12:29:29.783696345 +0000 UTC 
m=+102.141210487" Nov 24 12:29:29 crc kubenswrapper[4756]: W1124 12:29:29.793318 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4943ec6_a5a3_4e97_9073_cf59209bfbf3.slice/crio-97bfa5ca47de23a90df84d0596eebf60d94356a5b183d32888a74972af3b21a1 WatchSource:0}: Error finding container 97bfa5ca47de23a90df84d0596eebf60d94356a5b183d32888a74972af3b21a1: Status 404 returned error can't find the container with id 97bfa5ca47de23a90df84d0596eebf60d94356a5b183d32888a74972af3b21a1 Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.813520 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.814697 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.816911 4756 patch_prober.go:28] interesting pod/console-f9d7485db-srchr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.816965 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-srchr" podUID="874bfcf4-b717-4ee9-932f-8b28a2b68eac" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.880091 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.881285 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.884242 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.884687 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.895421 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.941174 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13e611a6-3e86-4c7c-b101-22a053149bf4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"13e611a6-3e86-4c7c-b101-22a053149bf4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:29:29 crc kubenswrapper[4756]: I1124 12:29:29.941241 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13e611a6-3e86-4c7c-b101-22a053149bf4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"13e611a6-3e86-4c7c-b101-22a053149bf4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.006536 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-29s2z"] Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.008207 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.014369 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.016457 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-29s2z"] Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.042203 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-utilities\") pod \"redhat-operators-29s2z\" (UID: \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\") " pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.042278 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7wjj\" (UniqueName: \"kubernetes.io/projected/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-kube-api-access-l7wjj\") pod \"redhat-operators-29s2z\" (UID: \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\") " pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.042344 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13e611a6-3e86-4c7c-b101-22a053149bf4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"13e611a6-3e86-4c7c-b101-22a053149bf4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.042396 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13e611a6-3e86-4c7c-b101-22a053149bf4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"13e611a6-3e86-4c7c-b101-22a053149bf4\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.042423 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-catalog-content\") pod \"redhat-operators-29s2z\" (UID: \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\") " pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.042503 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13e611a6-3e86-4c7c-b101-22a053149bf4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"13e611a6-3e86-4c7c-b101-22a053149bf4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.075564 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13e611a6-3e86-4c7c-b101-22a053149bf4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"13e611a6-3e86-4c7c-b101-22a053149bf4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.143990 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-utilities\") pod \"redhat-operators-29s2z\" (UID: \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\") " pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.144054 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7wjj\" (UniqueName: \"kubernetes.io/projected/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-kube-api-access-l7wjj\") pod \"redhat-operators-29s2z\" (UID: \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\") " 
pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.144112 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-catalog-content\") pod \"redhat-operators-29s2z\" (UID: \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\") " pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.144804 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-catalog-content\") pod \"redhat-operators-29s2z\" (UID: \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\") " pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.145434 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-utilities\") pod \"redhat-operators-29s2z\" (UID: \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\") " pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.165500 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7wjj\" (UniqueName: \"kubernetes.io/projected/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-kube-api-access-l7wjj\") pod \"redhat-operators-29s2z\" (UID: \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\") " pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.167648 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.175617 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:30 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:30 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:30 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.175700 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.213096 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.379857 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.406095 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kqzr4"] Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.408406 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.416417 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kqzr4"] Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.448961 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b977af7-ebc9-4742-8af6-10ab6cfb7877-utilities\") pod \"redhat-operators-kqzr4\" (UID: \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\") " pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.449016 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b977af7-ebc9-4742-8af6-10ab6cfb7877-catalog-content\") pod \"redhat-operators-kqzr4\" (UID: \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\") " pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.449191 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhjq9\" (UniqueName: \"kubernetes.io/projected/3b977af7-ebc9-4742-8af6-10ab6cfb7877-kube-api-access-rhjq9\") pod \"redhat-operators-kqzr4\" (UID: \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\") " pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.501808 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.551257 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b977af7-ebc9-4742-8af6-10ab6cfb7877-utilities\") pod 
\"redhat-operators-kqzr4\" (UID: \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\") " pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.551316 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b977af7-ebc9-4742-8af6-10ab6cfb7877-catalog-content\") pod \"redhat-operators-kqzr4\" (UID: \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\") " pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.551349 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhjq9\" (UniqueName: \"kubernetes.io/projected/3b977af7-ebc9-4742-8af6-10ab6cfb7877-kube-api-access-rhjq9\") pod \"redhat-operators-kqzr4\" (UID: \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\") " pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.552361 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b977af7-ebc9-4742-8af6-10ab6cfb7877-utilities\") pod \"redhat-operators-kqzr4\" (UID: \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\") " pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.552468 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b977af7-ebc9-4742-8af6-10ab6cfb7877-catalog-content\") pod \"redhat-operators-kqzr4\" (UID: \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\") " pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.576278 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhjq9\" (UniqueName: \"kubernetes.io/projected/3b977af7-ebc9-4742-8af6-10ab6cfb7877-kube-api-access-rhjq9\") pod \"redhat-operators-kqzr4\" (UID: 
\"3b977af7-ebc9-4742-8af6-10ab6cfb7877\") " pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.724006 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.741047 4756 generic.go:334] "Generic (PLEG): container finished" podID="e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" containerID="2a66892e9ef9d11949d7face777d774347e662db8400572cfcf089019f290be1" exitCode=0 Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.741118 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jfcl" event={"ID":"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b","Type":"ContainerDied","Data":"2a66892e9ef9d11949d7face777d774347e662db8400572cfcf089019f290be1"} Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.741196 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jfcl" event={"ID":"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b","Type":"ContainerStarted","Data":"67e8cb3f30a50d3cc9b4c07c5031ff31fc9bf534af18ed1d9d2548918888f17f"} Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.749050 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.770957 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" event={"ID":"c4943ec6-a5a3-4e97-9073-cf59209bfbf3","Type":"ContainerStarted","Data":"db5d9d94bddb274e0f305d57766b49fb65f3b74577cc312af6e035c505fdc1d0"} Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.771041 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" 
event={"ID":"c4943ec6-a5a3-4e97-9073-cf59209bfbf3","Type":"ContainerStarted","Data":"97bfa5ca47de23a90df84d0596eebf60d94356a5b183d32888a74972af3b21a1"} Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.771141 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.775103 4756 generic.go:334] "Generic (PLEG): container finished" podID="ef577d14-3e75-4898-a0d8-cf7f03912760" containerID="3c3527efed8c3bfb71374b867623fd7b4cbc5cf296ebe599c0e0736e87ebb4ef" exitCode=0 Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.775285 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8d29p" event={"ID":"ef577d14-3e75-4898-a0d8-cf7f03912760","Type":"ContainerDied","Data":"3c3527efed8c3bfb71374b867623fd7b4cbc5cf296ebe599c0e0736e87ebb4ef"} Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.791137 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" podStartSLOduration=82.791116698 podStartE2EDuration="1m22.791116698s" podCreationTimestamp="2025-11-24 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:30.789179417 +0000 UTC m=+103.146693559" watchObservedRunningTime="2025-11-24 12:29:30.791116698 +0000 UTC m=+103.148630840" Nov 24 12:29:30 crc kubenswrapper[4756]: I1124 12:29:30.842422 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-29s2z"] Nov 24 12:29:30 crc kubenswrapper[4756]: W1124 12:29:30.900791 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod609d4ed3_7c39_4c20_9c1a_1b536ef07a7e.slice/crio-aca1711cffe68954ab372653a32fe8cbbc0fd224d35465b0a115c5c92e7fb2c4 
WatchSource:0}: Error finding container aca1711cffe68954ab372653a32fe8cbbc0fd224d35465b0a115c5c92e7fb2c4: Status 404 returned error can't find the container with id aca1711cffe68954ab372653a32fe8cbbc0fd224d35465b0a115c5c92e7fb2c4 Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.077753 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kqzr4"] Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.172051 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:31 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:31 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:31 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.172172 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.188243 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr" Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.267045 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/036136f7-02ff-449a-9367-0cf354821811-secret-volume\") pod \"036136f7-02ff-449a-9367-0cf354821811\" (UID: \"036136f7-02ff-449a-9367-0cf354821811\") " Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.267303 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx9js\" (UniqueName: \"kubernetes.io/projected/036136f7-02ff-449a-9367-0cf354821811-kube-api-access-lx9js\") pod \"036136f7-02ff-449a-9367-0cf354821811\" (UID: \"036136f7-02ff-449a-9367-0cf354821811\") " Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.267330 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/036136f7-02ff-449a-9367-0cf354821811-config-volume\") pod \"036136f7-02ff-449a-9367-0cf354821811\" (UID: \"036136f7-02ff-449a-9367-0cf354821811\") " Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.269679 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/036136f7-02ff-449a-9367-0cf354821811-config-volume" (OuterVolumeSpecName: "config-volume") pod "036136f7-02ff-449a-9367-0cf354821811" (UID: "036136f7-02ff-449a-9367-0cf354821811"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.275524 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/036136f7-02ff-449a-9367-0cf354821811-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "036136f7-02ff-449a-9367-0cf354821811" (UID: "036136f7-02ff-449a-9367-0cf354821811"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.276745 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/036136f7-02ff-449a-9367-0cf354821811-kube-api-access-lx9js" (OuterVolumeSpecName: "kube-api-access-lx9js") pod "036136f7-02ff-449a-9367-0cf354821811" (UID: "036136f7-02ff-449a-9367-0cf354821811"). InnerVolumeSpecName "kube-api-access-lx9js". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.369570 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/036136f7-02ff-449a-9367-0cf354821811-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.369611 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx9js\" (UniqueName: \"kubernetes.io/projected/036136f7-02ff-449a-9367-0cf354821811-kube-api-access-lx9js\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.369621 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/036136f7-02ff-449a-9367-0cf354821811-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.791210 4756 generic.go:334] "Generic (PLEG): container finished" podID="3b977af7-ebc9-4742-8af6-10ab6cfb7877" containerID="909445e361a18957635b31c6fae8b12ebb2f773a598c9a194a53b68ae6dd8ccd" exitCode=0 Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.791357 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqzr4" event={"ID":"3b977af7-ebc9-4742-8af6-10ab6cfb7877","Type":"ContainerDied","Data":"909445e361a18957635b31c6fae8b12ebb2f773a598c9a194a53b68ae6dd8ccd"} Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.791770 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqzr4" event={"ID":"3b977af7-ebc9-4742-8af6-10ab6cfb7877","Type":"ContainerStarted","Data":"8e793f5ee13d334e89871846f2db702fc91dda9bed86d9aae93e5b71732c4311"} Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.795691 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr" event={"ID":"036136f7-02ff-449a-9367-0cf354821811","Type":"ContainerDied","Data":"b17eea37517024d3888d59ad154c789f70fcf15a82820143d9bc0ec3d6f42444"} Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.795761 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b17eea37517024d3888d59ad154c789f70fcf15a82820143d9bc0ec3d6f42444" Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.795826 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr" Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.801808 4756 generic.go:334] "Generic (PLEG): container finished" podID="609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" containerID="d44de153ed0815c40a9ff48379ed0f759cdd6921a5be44ecdde918e8d99875cf" exitCode=0 Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.801899 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29s2z" event={"ID":"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e","Type":"ContainerDied","Data":"d44de153ed0815c40a9ff48379ed0f759cdd6921a5be44ecdde918e8d99875cf"} Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.801940 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29s2z" event={"ID":"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e","Type":"ContainerStarted","Data":"aca1711cffe68954ab372653a32fe8cbbc0fd224d35465b0a115c5c92e7fb2c4"} Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.807606 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"13e611a6-3e86-4c7c-b101-22a053149bf4","Type":"ContainerStarted","Data":"64901f5679ab81076fa7a663404cdd075e7a363029c11018538e0b96f986a811"} Nov 24 12:29:31 crc kubenswrapper[4756]: I1124 12:29:31.807751 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"13e611a6-3e86-4c7c-b101-22a053149bf4","Type":"ContainerStarted","Data":"044aea9ce29763659745f9dd67a178140fe192b80fa7e3f8231685191dda0f64"} Nov 24 12:29:32 crc kubenswrapper[4756]: I1124 12:29:32.170511 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:32 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:32 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:32 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:32 crc kubenswrapper[4756]: I1124 12:29:32.170582 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:32 crc kubenswrapper[4756]: I1124 12:29:32.827130 4756 generic.go:334] "Generic (PLEG): container finished" podID="13e611a6-3e86-4c7c-b101-22a053149bf4" containerID="64901f5679ab81076fa7a663404cdd075e7a363029c11018538e0b96f986a811" exitCode=0 Nov 24 12:29:32 crc kubenswrapper[4756]: I1124 12:29:32.827187 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"13e611a6-3e86-4c7c-b101-22a053149bf4","Type":"ContainerDied","Data":"64901f5679ab81076fa7a663404cdd075e7a363029c11018538e0b96f986a811"} Nov 24 12:29:33 
crc kubenswrapper[4756]: I1124 12:29:33.169762 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:33 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:33 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:33 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:33 crc kubenswrapper[4756]: I1124 12:29:33.169840 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:33 crc kubenswrapper[4756]: I1124 12:29:33.864592 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 12:29:33 crc kubenswrapper[4756]: E1124 12:29:33.865370 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="036136f7-02ff-449a-9367-0cf354821811" containerName="collect-profiles" Nov 24 12:29:33 crc kubenswrapper[4756]: I1124 12:29:33.865385 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="036136f7-02ff-449a-9367-0cf354821811" containerName="collect-profiles" Nov 24 12:29:33 crc kubenswrapper[4756]: I1124 12:29:33.865517 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="036136f7-02ff-449a-9367-0cf354821811" containerName="collect-profiles" Nov 24 12:29:33 crc kubenswrapper[4756]: I1124 12:29:33.866298 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 12:29:33 crc kubenswrapper[4756]: I1124 12:29:33.866379 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:29:33 crc kubenswrapper[4756]: I1124 12:29:33.868972 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 24 12:29:33 crc kubenswrapper[4756]: I1124 12:29:33.869315 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 24 12:29:33 crc kubenswrapper[4756]: I1124 12:29:33.916692 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77d41ff7-b5eb-4b62-b11d-2b5984c33e10-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"77d41ff7-b5eb-4b62-b11d-2b5984c33e10\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:29:33 crc kubenswrapper[4756]: I1124 12:29:33.916777 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77d41ff7-b5eb-4b62-b11d-2b5984c33e10-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"77d41ff7-b5eb-4b62-b11d-2b5984c33e10\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:29:34 crc kubenswrapper[4756]: I1124 12:29:34.018445 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77d41ff7-b5eb-4b62-b11d-2b5984c33e10-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"77d41ff7-b5eb-4b62-b11d-2b5984c33e10\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:29:34 crc kubenswrapper[4756]: I1124 12:29:34.018512 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77d41ff7-b5eb-4b62-b11d-2b5984c33e10-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"77d41ff7-b5eb-4b62-b11d-2b5984c33e10\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:29:34 crc kubenswrapper[4756]: I1124 12:29:34.018617 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77d41ff7-b5eb-4b62-b11d-2b5984c33e10-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"77d41ff7-b5eb-4b62-b11d-2b5984c33e10\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:29:34 crc kubenswrapper[4756]: I1124 12:29:34.043246 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77d41ff7-b5eb-4b62-b11d-2b5984c33e10-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"77d41ff7-b5eb-4b62-b11d-2b5984c33e10\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:29:34 crc kubenswrapper[4756]: I1124 12:29:34.174895 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:34 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:34 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:34 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:34 crc kubenswrapper[4756]: I1124 12:29:34.177914 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:34 crc kubenswrapper[4756]: I1124 12:29:34.188638 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:29:34 crc kubenswrapper[4756]: I1124 12:29:34.418549 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:34 crc kubenswrapper[4756]: I1124 12:29:34.425625 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tsnkp" Nov 24 12:29:35 crc kubenswrapper[4756]: I1124 12:29:35.171260 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:35 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:35 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:35 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:35 crc kubenswrapper[4756]: I1124 12:29:35.171710 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:35 crc kubenswrapper[4756]: I1124 12:29:35.278356 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-h8l7m" Nov 24 12:29:36 crc kubenswrapper[4756]: I1124 12:29:36.170653 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:36 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:36 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:36 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:36 crc 
kubenswrapper[4756]: I1124 12:29:36.170757 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:37 crc kubenswrapper[4756]: I1124 12:29:37.171535 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:37 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:37 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:37 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:37 crc kubenswrapper[4756]: I1124 12:29:37.171627 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:38 crc kubenswrapper[4756]: I1124 12:29:38.169773 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:38 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:38 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:38 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:38 crc kubenswrapper[4756]: I1124 12:29:38.170124 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:39 crc kubenswrapper[4756]: I1124 
12:29:39.177339 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:39 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Nov 24 12:29:39 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:39 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:39 crc kubenswrapper[4756]: I1124 12:29:39.177421 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:39 crc kubenswrapper[4756]: I1124 12:29:39.467606 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hz6cv" Nov 24 12:29:39 crc kubenswrapper[4756]: I1124 12:29:39.812781 4756 patch_prober.go:28] interesting pod/console-f9d7485db-srchr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 24 12:29:39 crc kubenswrapper[4756]: I1124 12:29:39.813426 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-srchr" podUID="874bfcf4-b717-4ee9-932f-8b28a2b68eac" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 24 12:29:40 crc kubenswrapper[4756]: I1124 12:29:40.171834 4756 patch_prober.go:28] interesting pod/router-default-5444994796-mfl9q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 12:29:40 crc 
kubenswrapper[4756]: [+]has-synced ok Nov 24 12:29:40 crc kubenswrapper[4756]: [+]process-running ok Nov 24 12:29:40 crc kubenswrapper[4756]: healthz check failed Nov 24 12:29:40 crc kubenswrapper[4756]: I1124 12:29:40.172490 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mfl9q" podUID="3a420a5d-a184-43dd-a25c-80c97502ce62" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 12:29:41 crc kubenswrapper[4756]: I1124 12:29:41.171416 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:41 crc kubenswrapper[4756]: I1124 12:29:41.174122 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mfl9q" Nov 24 12:29:46 crc kubenswrapper[4756]: I1124 12:29:46.023824 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:29:46 crc kubenswrapper[4756]: I1124 12:29:46.126338 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13e611a6-3e86-4c7c-b101-22a053149bf4-kube-api-access\") pod \"13e611a6-3e86-4c7c-b101-22a053149bf4\" (UID: \"13e611a6-3e86-4c7c-b101-22a053149bf4\") " Nov 24 12:29:46 crc kubenswrapper[4756]: I1124 12:29:46.126448 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13e611a6-3e86-4c7c-b101-22a053149bf4-kubelet-dir\") pod \"13e611a6-3e86-4c7c-b101-22a053149bf4\" (UID: \"13e611a6-3e86-4c7c-b101-22a053149bf4\") " Nov 24 12:29:46 crc kubenswrapper[4756]: I1124 12:29:46.126614 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13e611a6-3e86-4c7c-b101-22a053149bf4-kubelet-dir" 
(OuterVolumeSpecName: "kubelet-dir") pod "13e611a6-3e86-4c7c-b101-22a053149bf4" (UID: "13e611a6-3e86-4c7c-b101-22a053149bf4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:29:46 crc kubenswrapper[4756]: I1124 12:29:46.126833 4756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13e611a6-3e86-4c7c-b101-22a053149bf4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:46 crc kubenswrapper[4756]: I1124 12:29:46.135409 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13e611a6-3e86-4c7c-b101-22a053149bf4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "13e611a6-3e86-4c7c-b101-22a053149bf4" (UID: "13e611a6-3e86-4c7c-b101-22a053149bf4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:46 crc kubenswrapper[4756]: I1124 12:29:46.229001 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13e611a6-3e86-4c7c-b101-22a053149bf4-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:46 crc kubenswrapper[4756]: I1124 12:29:46.924046 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"13e611a6-3e86-4c7c-b101-22a053149bf4","Type":"ContainerDied","Data":"044aea9ce29763659745f9dd67a178140fe192b80fa7e3f8231685191dda0f64"} Nov 24 12:29:46 crc kubenswrapper[4756]: I1124 12:29:46.924532 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="044aea9ce29763659745f9dd67a178140fe192b80fa7e3f8231685191dda0f64" Nov 24 12:29:46 crc kubenswrapper[4756]: I1124 12:29:46.924195 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 12:29:49 crc kubenswrapper[4756]: I1124 12:29:49.096041 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:29:49 crc kubenswrapper[4756]: I1124 12:29:49.997701 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:50 crc kubenswrapper[4756]: I1124 12:29:50.003203 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:29:52 crc kubenswrapper[4756]: E1124 12:29:52.176556 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 12:29:52 crc kubenswrapper[4756]: E1124 12:29:52.177047 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cswgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5cxgm_openshift-marketplace(ca136cca-642a-419a-be24-241bbb527020): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 12:29:52 crc kubenswrapper[4756]: E1124 12:29:52.179683 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5cxgm" podUID="ca136cca-642a-419a-be24-241bbb527020" Nov 24 12:29:52 crc 
kubenswrapper[4756]: I1124 12:29:52.499398 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 12:29:55 crc kubenswrapper[4756]: E1124 12:29:55.275275 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5cxgm" podUID="ca136cca-642a-419a-be24-241bbb527020" Nov 24 12:29:55 crc kubenswrapper[4756]: I1124 12:29:55.979316 4756 generic.go:334] "Generic (PLEG): container finished" podID="a3307bf9-529c-4d16-8a9c-87b46c381ca6" containerID="a7030d26f08a7b3d2bd0c2ba04c353323c10804d42883b756a554100100380a9" exitCode=0 Nov 24 12:29:55 crc kubenswrapper[4756]: I1124 12:29:55.979478 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdhwb" event={"ID":"a3307bf9-529c-4d16-8a9c-87b46c381ca6","Type":"ContainerDied","Data":"a7030d26f08a7b3d2bd0c2ba04c353323c10804d42883b756a554100100380a9"} Nov 24 12:29:55 crc kubenswrapper[4756]: I1124 12:29:55.985674 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqzr4" event={"ID":"3b977af7-ebc9-4742-8af6-10ab6cfb7877","Type":"ContainerStarted","Data":"1231df94d81a5d76c9e2c5878b874cf1e72b64d8cf803cb71ab20beac2e54d9c"} Nov 24 12:29:55 crc kubenswrapper[4756]: I1124 12:29:55.993392 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29s2z" event={"ID":"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e","Type":"ContainerStarted","Data":"9d65b3a17cac10a83fb2ae834eb67ccca64333e30c75d7e908536042ee6d2fea"} Nov 24 12:29:56 crc kubenswrapper[4756]: I1124 12:29:56.007515 4756 generic.go:334] "Generic (PLEG): container finished" podID="ef577d14-3e75-4898-a0d8-cf7f03912760" 
containerID="fdabda56bd9928fb5e5d67fb20034916d54968e1d2f9c2e010d66e56b5cac890" exitCode=0 Nov 24 12:29:56 crc kubenswrapper[4756]: I1124 12:29:56.008225 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8d29p" event={"ID":"ef577d14-3e75-4898-a0d8-cf7f03912760","Type":"ContainerDied","Data":"fdabda56bd9928fb5e5d67fb20034916d54968e1d2f9c2e010d66e56b5cac890"} Nov 24 12:29:56 crc kubenswrapper[4756]: I1124 12:29:56.013344 4756 generic.go:334] "Generic (PLEG): container finished" podID="e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" containerID="14fa2e6cc7497d32711d9b11db6ef28969be534cc2c2b8b2969825b008e4f89e" exitCode=0 Nov 24 12:29:56 crc kubenswrapper[4756]: I1124 12:29:56.013430 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jfcl" event={"ID":"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b","Type":"ContainerDied","Data":"14fa2e6cc7497d32711d9b11db6ef28969be534cc2c2b8b2969825b008e4f89e"} Nov 24 12:29:56 crc kubenswrapper[4756]: I1124 12:29:56.016687 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"77d41ff7-b5eb-4b62-b11d-2b5984c33e10","Type":"ContainerStarted","Data":"5dc6e886c51037588bcfd651af9e21cde175e5b85f2dcfa02048aa23a6dc8664"} Nov 24 12:29:56 crc kubenswrapper[4756]: I1124 12:29:56.016734 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"77d41ff7-b5eb-4b62-b11d-2b5984c33e10","Type":"ContainerStarted","Data":"c585677b2ab4ccd0e0f39924ad8851a0394368f2f846274651239d2bd46ab7bd"} Nov 24 12:29:56 crc kubenswrapper[4756]: I1124 12:29:56.019687 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6smp9" event={"ID":"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76","Type":"ContainerStarted","Data":"ef135994c664fcf4e249336f0680a4452db6515a9243b34a69a5ac93bb5a0bcb"} Nov 24 12:29:56 crc kubenswrapper[4756]: 
I1124 12:29:56.030013 4756 generic.go:334] "Generic (PLEG): container finished" podID="1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" containerID="d2d8938d749316a6b06b393f6a7c0fe02ece78c91f47337667734594a98ffb99" exitCode=0 Nov 24 12:29:56 crc kubenswrapper[4756]: I1124 12:29:56.030056 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb9nl" event={"ID":"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4","Type":"ContainerDied","Data":"d2d8938d749316a6b06b393f6a7c0fe02ece78c91f47337667734594a98ffb99"} Nov 24 12:29:56 crc kubenswrapper[4756]: I1124 12:29:56.185204 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=23.18518282 podStartE2EDuration="23.18518282s" podCreationTimestamp="2025-11-24 12:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:29:56.141275265 +0000 UTC m=+128.498789407" watchObservedRunningTime="2025-11-24 12:29:56.18518282 +0000 UTC m=+128.542696962" Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.039853 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdhwb" event={"ID":"a3307bf9-529c-4d16-8a9c-87b46c381ca6","Type":"ContainerStarted","Data":"39856e30705eb367f62342113dc9a825ea190e2daeee5ce3aeb074ebce6c4918"} Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.046858 4756 generic.go:334] "Generic (PLEG): container finished" podID="3b977af7-ebc9-4742-8af6-10ab6cfb7877" containerID="1231df94d81a5d76c9e2c5878b874cf1e72b64d8cf803cb71ab20beac2e54d9c" exitCode=0 Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.046960 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqzr4" 
event={"ID":"3b977af7-ebc9-4742-8af6-10ab6cfb7877","Type":"ContainerDied","Data":"1231df94d81a5d76c9e2c5878b874cf1e72b64d8cf803cb71ab20beac2e54d9c"} Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.049109 4756 generic.go:334] "Generic (PLEG): container finished" podID="609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" containerID="9d65b3a17cac10a83fb2ae834eb67ccca64333e30c75d7e908536042ee6d2fea" exitCode=0 Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.049222 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29s2z" event={"ID":"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e","Type":"ContainerDied","Data":"9d65b3a17cac10a83fb2ae834eb67ccca64333e30c75d7e908536042ee6d2fea"} Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.062325 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8d29p" event={"ID":"ef577d14-3e75-4898-a0d8-cf7f03912760","Type":"ContainerStarted","Data":"883fd2e2fdce58c71b1c77af103b047dccd44a41715e848810a81f9e9ea9c8c4"} Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.071042 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jfcl" event={"ID":"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b","Type":"ContainerStarted","Data":"f78169453d57035d857f1b68a7f24f6d8264b5f1979613e79cea5395017d2990"} Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.072429 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bdhwb" podStartSLOduration=3.085592407 podStartE2EDuration="31.072416432s" podCreationTimestamp="2025-11-24 12:29:26 +0000 UTC" firstStartedPulling="2025-11-24 12:29:28.659649563 +0000 UTC m=+101.017163715" lastFinishedPulling="2025-11-24 12:29:56.646473598 +0000 UTC m=+129.003987740" observedRunningTime="2025-11-24 12:29:57.069367528 +0000 UTC m=+129.426881690" watchObservedRunningTime="2025-11-24 12:29:57.072416432 +0000 UTC 
m=+129.429930574" Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.073730 4756 generic.go:334] "Generic (PLEG): container finished" podID="77d41ff7-b5eb-4b62-b11d-2b5984c33e10" containerID="5dc6e886c51037588bcfd651af9e21cde175e5b85f2dcfa02048aa23a6dc8664" exitCode=0 Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.073863 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"77d41ff7-b5eb-4b62-b11d-2b5984c33e10","Type":"ContainerDied","Data":"5dc6e886c51037588bcfd651af9e21cde175e5b85f2dcfa02048aa23a6dc8664"} Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.076947 4756 generic.go:334] "Generic (PLEG): container finished" podID="dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" containerID="ef135994c664fcf4e249336f0680a4452db6515a9243b34a69a5ac93bb5a0bcb" exitCode=0 Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.077007 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6smp9" event={"ID":"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76","Type":"ContainerDied","Data":"ef135994c664fcf4e249336f0680a4452db6515a9243b34a69a5ac93bb5a0bcb"} Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.099980 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb9nl" event={"ID":"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4","Type":"ContainerStarted","Data":"c798eccd3660a835606ea7a58605a4dba48057dcdc79bbf74a02c5fc93769505"} Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.107814 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8d29p" podStartSLOduration=3.375135343 podStartE2EDuration="29.107792838s" podCreationTimestamp="2025-11-24 12:29:28 +0000 UTC" firstStartedPulling="2025-11-24 12:29:30.795279096 +0000 UTC m=+103.152793238" lastFinishedPulling="2025-11-24 12:29:56.527936601 +0000 UTC m=+128.885450733" observedRunningTime="2025-11-24 
12:29:57.098447271 +0000 UTC m=+129.455961423" watchObservedRunningTime="2025-11-24 12:29:57.107792838 +0000 UTC m=+129.465306980" Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.151275 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.151328 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.162289 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nb9nl" podStartSLOduration=3.330145539 podStartE2EDuration="31.162267245s" podCreationTimestamp="2025-11-24 12:29:26 +0000 UTC" firstStartedPulling="2025-11-24 12:29:28.640373767 +0000 UTC m=+100.997887909" lastFinishedPulling="2025-11-24 12:29:56.472495483 +0000 UTC m=+128.830009615" observedRunningTime="2025-11-24 12:29:57.161839276 +0000 UTC m=+129.519353418" watchObservedRunningTime="2025-11-24 12:29:57.162267245 +0000 UTC m=+129.519781387" Nov 24 12:29:57 crc kubenswrapper[4756]: I1124 12:29:57.181368 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7jfcl" podStartSLOduration=3.231511266 podStartE2EDuration="29.181352497s" podCreationTimestamp="2025-11-24 12:29:28 +0000 UTC" firstStartedPulling="2025-11-24 12:29:30.795128073 +0000 UTC m=+103.152642215" lastFinishedPulling="2025-11-24 12:29:56.744969304 +0000 UTC m=+129.102483446" observedRunningTime="2025-11-24 12:29:57.18054909 +0000 UTC m=+129.538063252" watchObservedRunningTime="2025-11-24 12:29:57.181352497 +0000 UTC m=+129.538866639" Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.110304 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29s2z" 
event={"ID":"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e","Type":"ContainerStarted","Data":"143daebb13efa36f19aea504d071fa2576cad2de36dde80bae5f6d6b1bdbf2e5"} Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.123100 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6smp9" event={"ID":"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76","Type":"ContainerStarted","Data":"4cc47e85fbed29ec171dfe60e440e11cf0bede5e6da100b3ca946f4b0fd69df3"} Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.126548 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqzr4" event={"ID":"3b977af7-ebc9-4742-8af6-10ab6cfb7877","Type":"ContainerStarted","Data":"be3a7a83fdf60c1a087362d94eb29172aca7d791f9137792a9642d525ab4424f"} Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.163095 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-29s2z" podStartSLOduration=3.397934231 podStartE2EDuration="29.163074671s" podCreationTimestamp="2025-11-24 12:29:29 +0000 UTC" firstStartedPulling="2025-11-24 12:29:31.804400986 +0000 UTC m=+104.161915128" lastFinishedPulling="2025-11-24 12:29:57.569541416 +0000 UTC m=+129.927055568" observedRunningTime="2025-11-24 12:29:58.139647877 +0000 UTC m=+130.497162019" watchObservedRunningTime="2025-11-24 12:29:58.163074671 +0000 UTC m=+130.520588813" Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.165312 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6smp9" podStartSLOduration=2.204304028 podStartE2EDuration="31.165302578s" podCreationTimestamp="2025-11-24 12:29:27 +0000 UTC" firstStartedPulling="2025-11-24 12:29:28.622436909 +0000 UTC m=+100.979951061" lastFinishedPulling="2025-11-24 12:29:57.583435469 +0000 UTC m=+129.940949611" observedRunningTime="2025-11-24 12:29:58.161203621 +0000 UTC m=+130.518717773" 
watchObservedRunningTime="2025-11-24 12:29:58.165302578 +0000 UTC m=+130.522816720" Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.369885 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nb9nl" podUID="1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" containerName="registry-server" probeResult="failure" output=< Nov 24 12:29:58 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 12:29:58 crc kubenswrapper[4756]: > Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.539352 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.564083 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kqzr4" podStartSLOduration=3.908361713 podStartE2EDuration="28.564060659s" podCreationTimestamp="2025-11-24 12:29:30 +0000 UTC" firstStartedPulling="2025-11-24 12:29:32.837548683 +0000 UTC m=+105.195062825" lastFinishedPulling="2025-11-24 12:29:57.493247629 +0000 UTC m=+129.850761771" observedRunningTime="2025-11-24 12:29:58.183363898 +0000 UTC m=+130.540878050" watchObservedRunningTime="2025-11-24 12:29:58.564060659 +0000 UTC m=+130.921574801" Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.647490 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77d41ff7-b5eb-4b62-b11d-2b5984c33e10-kubelet-dir\") pod \"77d41ff7-b5eb-4b62-b11d-2b5984c33e10\" (UID: \"77d41ff7-b5eb-4b62-b11d-2b5984c33e10\") " Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.648067 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77d41ff7-b5eb-4b62-b11d-2b5984c33e10-kube-api-access\") pod \"77d41ff7-b5eb-4b62-b11d-2b5984c33e10\" (UID: 
\"77d41ff7-b5eb-4b62-b11d-2b5984c33e10\") " Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.647631 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d41ff7-b5eb-4b62-b11d-2b5984c33e10-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "77d41ff7-b5eb-4b62-b11d-2b5984c33e10" (UID: "77d41ff7-b5eb-4b62-b11d-2b5984c33e10"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.648800 4756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77d41ff7-b5eb-4b62-b11d-2b5984c33e10-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.664141 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77d41ff7-b5eb-4b62-b11d-2b5984c33e10-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "77d41ff7-b5eb-4b62-b11d-2b5984c33e10" (UID: "77d41ff7-b5eb-4b62-b11d-2b5984c33e10"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.752308 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77d41ff7-b5eb-4b62-b11d-2b5984c33e10-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.930987 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.931047 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:58 crc kubenswrapper[4756]: I1124 12:29:58.997320 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:29:59 crc kubenswrapper[4756]: I1124 12:29:59.134280 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 12:29:59 crc kubenswrapper[4756]: I1124 12:29:59.134388 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"77d41ff7-b5eb-4b62-b11d-2b5984c33e10","Type":"ContainerDied","Data":"c585677b2ab4ccd0e0f39924ad8851a0394368f2f846274651239d2bd46ab7bd"} Nov 24 12:29:59 crc kubenswrapper[4756]: I1124 12:29:59.134436 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c585677b2ab4ccd0e0f39924ad8851a0394368f2f846274651239d2bd46ab7bd" Nov 24 12:29:59 crc kubenswrapper[4756]: I1124 12:29:59.415102 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:29:59 crc kubenswrapper[4756]: I1124 12:29:59.415513 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:29:59 crc kubenswrapper[4756]: I1124 12:29:59.466496 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.134929 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr"] Nov 24 12:30:00 crc kubenswrapper[4756]: E1124 12:30:00.135774 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13e611a6-3e86-4c7c-b101-22a053149bf4" containerName="pruner" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.135794 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="13e611a6-3e86-4c7c-b101-22a053149bf4" containerName="pruner" Nov 24 12:30:00 crc kubenswrapper[4756]: E1124 12:30:00.135808 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77d41ff7-b5eb-4b62-b11d-2b5984c33e10" containerName="pruner" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 
12:30:00.135815 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="77d41ff7-b5eb-4b62-b11d-2b5984c33e10" containerName="pruner" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.135941 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="13e611a6-3e86-4c7c-b101-22a053149bf4" containerName="pruner" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.135961 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="77d41ff7-b5eb-4b62-b11d-2b5984c33e10" containerName="pruner" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.136570 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.139194 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.140035 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.148689 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr"] Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.171518 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmgrg\" (UniqueName: \"kubernetes.io/projected/bfeb738d-4835-45a7-90a2-440d45459f4d-kube-api-access-tmgrg\") pod \"collect-profiles-29399790-5rwrr\" (UID: \"bfeb738d-4835-45a7-90a2-440d45459f4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.171585 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/bfeb738d-4835-45a7-90a2-440d45459f4d-secret-volume\") pod \"collect-profiles-29399790-5rwrr\" (UID: \"bfeb738d-4835-45a7-90a2-440d45459f4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.171612 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfeb738d-4835-45a7-90a2-440d45459f4d-config-volume\") pod \"collect-profiles-29399790-5rwrr\" (UID: \"bfeb738d-4835-45a7-90a2-440d45459f4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.209425 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bnnhq" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.273010 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmgrg\" (UniqueName: \"kubernetes.io/projected/bfeb738d-4835-45a7-90a2-440d45459f4d-kube-api-access-tmgrg\") pod \"collect-profiles-29399790-5rwrr\" (UID: \"bfeb738d-4835-45a7-90a2-440d45459f4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.273080 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfeb738d-4835-45a7-90a2-440d45459f4d-secret-volume\") pod \"collect-profiles-29399790-5rwrr\" (UID: \"bfeb738d-4835-45a7-90a2-440d45459f4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.273134 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/bfeb738d-4835-45a7-90a2-440d45459f4d-config-volume\") pod \"collect-profiles-29399790-5rwrr\" (UID: \"bfeb738d-4835-45a7-90a2-440d45459f4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.274142 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfeb738d-4835-45a7-90a2-440d45459f4d-config-volume\") pod \"collect-profiles-29399790-5rwrr\" (UID: \"bfeb738d-4835-45a7-90a2-440d45459f4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.279278 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfeb738d-4835-45a7-90a2-440d45459f4d-secret-volume\") pod \"collect-profiles-29399790-5rwrr\" (UID: \"bfeb738d-4835-45a7-90a2-440d45459f4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.296914 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmgrg\" (UniqueName: \"kubernetes.io/projected/bfeb738d-4835-45a7-90a2-440d45459f4d-kube-api-access-tmgrg\") pod \"collect-profiles-29399790-5rwrr\" (UID: \"bfeb738d-4835-45a7-90a2-440d45459f4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.380025 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.380545 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.455526 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.725691 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.726279 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:30:00 crc kubenswrapper[4756]: I1124 12:30:00.901610 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr"] Nov 24 12:30:00 crc kubenswrapper[4756]: W1124 12:30:00.910816 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfeb738d_4835_45a7_90a2_440d45459f4d.slice/crio-0ba3ae833f68cca6564261433f6bfcd9c336e7a24e969e39e71b554f14662a28 WatchSource:0}: Error finding container 0ba3ae833f68cca6564261433f6bfcd9c336e7a24e969e39e71b554f14662a28: Status 404 returned error can't find the container with id 0ba3ae833f68cca6564261433f6bfcd9c336e7a24e969e39e71b554f14662a28 Nov 24 12:30:01 crc kubenswrapper[4756]: I1124 12:30:01.146318 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" event={"ID":"bfeb738d-4835-45a7-90a2-440d45459f4d","Type":"ContainerStarted","Data":"0ba3ae833f68cca6564261433f6bfcd9c336e7a24e969e39e71b554f14662a28"} Nov 24 12:30:01 crc kubenswrapper[4756]: I1124 12:30:01.195948 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:30:01 crc kubenswrapper[4756]: I1124 12:30:01.417915 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-29s2z" podUID="609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" containerName="registry-server" 
probeResult="failure" output=< Nov 24 12:30:01 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 12:30:01 crc kubenswrapper[4756]: > Nov 24 12:30:01 crc kubenswrapper[4756]: I1124 12:30:01.767391 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kqzr4" podUID="3b977af7-ebc9-4742-8af6-10ab6cfb7877" containerName="registry-server" probeResult="failure" output=< Nov 24 12:30:01 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 12:30:01 crc kubenswrapper[4756]: > Nov 24 12:30:02 crc kubenswrapper[4756]: I1124 12:30:02.154502 4756 generic.go:334] "Generic (PLEG): container finished" podID="bfeb738d-4835-45a7-90a2-440d45459f4d" containerID="50b2a6915da19d6edc0fc76894592f22ec1f0e1d6973aae18f36cb07595223ad" exitCode=0 Nov 24 12:30:02 crc kubenswrapper[4756]: I1124 12:30:02.154647 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" event={"ID":"bfeb738d-4835-45a7-90a2-440d45459f4d","Type":"ContainerDied","Data":"50b2a6915da19d6edc0fc76894592f22ec1f0e1d6973aae18f36cb07595223ad"} Nov 24 12:30:03 crc kubenswrapper[4756]: I1124 12:30:03.544819 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" Nov 24 12:30:03 crc kubenswrapper[4756]: I1124 12:30:03.732120 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfeb738d-4835-45a7-90a2-440d45459f4d-secret-volume\") pod \"bfeb738d-4835-45a7-90a2-440d45459f4d\" (UID: \"bfeb738d-4835-45a7-90a2-440d45459f4d\") " Nov 24 12:30:03 crc kubenswrapper[4756]: I1124 12:30:03.732250 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfeb738d-4835-45a7-90a2-440d45459f4d-config-volume\") pod \"bfeb738d-4835-45a7-90a2-440d45459f4d\" (UID: \"bfeb738d-4835-45a7-90a2-440d45459f4d\") " Nov 24 12:30:03 crc kubenswrapper[4756]: I1124 12:30:03.732290 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmgrg\" (UniqueName: \"kubernetes.io/projected/bfeb738d-4835-45a7-90a2-440d45459f4d-kube-api-access-tmgrg\") pod \"bfeb738d-4835-45a7-90a2-440d45459f4d\" (UID: \"bfeb738d-4835-45a7-90a2-440d45459f4d\") " Nov 24 12:30:03 crc kubenswrapper[4756]: I1124 12:30:03.733246 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfeb738d-4835-45a7-90a2-440d45459f4d-config-volume" (OuterVolumeSpecName: "config-volume") pod "bfeb738d-4835-45a7-90a2-440d45459f4d" (UID: "bfeb738d-4835-45a7-90a2-440d45459f4d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:30:03 crc kubenswrapper[4756]: I1124 12:30:03.738232 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfeb738d-4835-45a7-90a2-440d45459f4d-kube-api-access-tmgrg" (OuterVolumeSpecName: "kube-api-access-tmgrg") pod "bfeb738d-4835-45a7-90a2-440d45459f4d" (UID: "bfeb738d-4835-45a7-90a2-440d45459f4d"). 
InnerVolumeSpecName "kube-api-access-tmgrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:03 crc kubenswrapper[4756]: I1124 12:30:03.738626 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfeb738d-4835-45a7-90a2-440d45459f4d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bfeb738d-4835-45a7-90a2-440d45459f4d" (UID: "bfeb738d-4835-45a7-90a2-440d45459f4d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:03 crc kubenswrapper[4756]: I1124 12:30:03.833760 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfeb738d-4835-45a7-90a2-440d45459f4d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:03 crc kubenswrapper[4756]: I1124 12:30:03.833811 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmgrg\" (UniqueName: \"kubernetes.io/projected/bfeb738d-4835-45a7-90a2-440d45459f4d-kube-api-access-tmgrg\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:03 crc kubenswrapper[4756]: I1124 12:30:03.833827 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bfeb738d-4835-45a7-90a2-440d45459f4d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.171665 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" event={"ID":"bfeb738d-4835-45a7-90a2-440d45459f4d","Type":"ContainerDied","Data":"0ba3ae833f68cca6564261433f6bfcd9c336e7a24e969e39e71b554f14662a28"} Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.171718 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr" Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.171729 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ba3ae833f68cca6564261433f6bfcd9c336e7a24e969e39e71b554f14662a28" Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.218991 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jfcl"] Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.219627 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7jfcl" podUID="e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" containerName="registry-server" containerID="cri-o://f78169453d57035d857f1b68a7f24f6d8264b5f1979613e79cea5395017d2990" gracePeriod=2 Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.636517 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.746713 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-utilities\") pod \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\" (UID: \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\") " Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.746798 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-catalog-content\") pod \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\" (UID: \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\") " Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.746852 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4wv2\" (UniqueName: 
\"kubernetes.io/projected/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-kube-api-access-m4wv2\") pod \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\" (UID: \"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b\") " Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.748245 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-utilities" (OuterVolumeSpecName: "utilities") pod "e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" (UID: "e9c8ec10-6d9a-418c-bfb9-39a05c466e9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.751069 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-kube-api-access-m4wv2" (OuterVolumeSpecName: "kube-api-access-m4wv2") pod "e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" (UID: "e9c8ec10-6d9a-418c-bfb9-39a05c466e9b"). InnerVolumeSpecName "kube-api-access-m4wv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.769881 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" (UID: "e9c8ec10-6d9a-418c-bfb9-39a05c466e9b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.848135 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.848201 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:04 crc kubenswrapper[4756]: I1124 12:30:04.848217 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4wv2\" (UniqueName: \"kubernetes.io/projected/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b-kube-api-access-m4wv2\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.181140 4756 generic.go:334] "Generic (PLEG): container finished" podID="e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" containerID="f78169453d57035d857f1b68a7f24f6d8264b5f1979613e79cea5395017d2990" exitCode=0 Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.181220 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jfcl" event={"ID":"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b","Type":"ContainerDied","Data":"f78169453d57035d857f1b68a7f24f6d8264b5f1979613e79cea5395017d2990"} Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.181292 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jfcl" event={"ID":"e9c8ec10-6d9a-418c-bfb9-39a05c466e9b","Type":"ContainerDied","Data":"67e8cb3f30a50d3cc9b4c07c5031ff31fc9bf534af18ed1d9d2548918888f17f"} Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.181290 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jfcl" Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.181315 4756 scope.go:117] "RemoveContainer" containerID="f78169453d57035d857f1b68a7f24f6d8264b5f1979613e79cea5395017d2990" Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.202834 4756 scope.go:117] "RemoveContainer" containerID="14fa2e6cc7497d32711d9b11db6ef28969be534cc2c2b8b2969825b008e4f89e" Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.213385 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jfcl"] Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.217389 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jfcl"] Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.234287 4756 scope.go:117] "RemoveContainer" containerID="2a66892e9ef9d11949d7face777d774347e662db8400572cfcf089019f290be1" Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.252252 4756 scope.go:117] "RemoveContainer" containerID="f78169453d57035d857f1b68a7f24f6d8264b5f1979613e79cea5395017d2990" Nov 24 12:30:05 crc kubenswrapper[4756]: E1124 12:30:05.252780 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78169453d57035d857f1b68a7f24f6d8264b5f1979613e79cea5395017d2990\": container with ID starting with f78169453d57035d857f1b68a7f24f6d8264b5f1979613e79cea5395017d2990 not found: ID does not exist" containerID="f78169453d57035d857f1b68a7f24f6d8264b5f1979613e79cea5395017d2990" Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.252822 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78169453d57035d857f1b68a7f24f6d8264b5f1979613e79cea5395017d2990"} err="failed to get container status \"f78169453d57035d857f1b68a7f24f6d8264b5f1979613e79cea5395017d2990\": rpc error: code = NotFound desc = could not find container 
\"f78169453d57035d857f1b68a7f24f6d8264b5f1979613e79cea5395017d2990\": container with ID starting with f78169453d57035d857f1b68a7f24f6d8264b5f1979613e79cea5395017d2990 not found: ID does not exist" Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.252912 4756 scope.go:117] "RemoveContainer" containerID="14fa2e6cc7497d32711d9b11db6ef28969be534cc2c2b8b2969825b008e4f89e" Nov 24 12:30:05 crc kubenswrapper[4756]: E1124 12:30:05.253623 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14fa2e6cc7497d32711d9b11db6ef28969be534cc2c2b8b2969825b008e4f89e\": container with ID starting with 14fa2e6cc7497d32711d9b11db6ef28969be534cc2c2b8b2969825b008e4f89e not found: ID does not exist" containerID="14fa2e6cc7497d32711d9b11db6ef28969be534cc2c2b8b2969825b008e4f89e" Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.253650 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14fa2e6cc7497d32711d9b11db6ef28969be534cc2c2b8b2969825b008e4f89e"} err="failed to get container status \"14fa2e6cc7497d32711d9b11db6ef28969be534cc2c2b8b2969825b008e4f89e\": rpc error: code = NotFound desc = could not find container \"14fa2e6cc7497d32711d9b11db6ef28969be534cc2c2b8b2969825b008e4f89e\": container with ID starting with 14fa2e6cc7497d32711d9b11db6ef28969be534cc2c2b8b2969825b008e4f89e not found: ID does not exist" Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.253664 4756 scope.go:117] "RemoveContainer" containerID="2a66892e9ef9d11949d7face777d774347e662db8400572cfcf089019f290be1" Nov 24 12:30:05 crc kubenswrapper[4756]: E1124 12:30:05.253975 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a66892e9ef9d11949d7face777d774347e662db8400572cfcf089019f290be1\": container with ID starting with 2a66892e9ef9d11949d7face777d774347e662db8400572cfcf089019f290be1 not found: ID does not exist" 
containerID="2a66892e9ef9d11949d7face777d774347e662db8400572cfcf089019f290be1" Nov 24 12:30:05 crc kubenswrapper[4756]: I1124 12:30:05.253995 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a66892e9ef9d11949d7face777d774347e662db8400572cfcf089019f290be1"} err="failed to get container status \"2a66892e9ef9d11949d7face777d774347e662db8400572cfcf089019f290be1\": rpc error: code = NotFound desc = could not find container \"2a66892e9ef9d11949d7face777d774347e662db8400572cfcf089019f290be1\": container with ID starting with 2a66892e9ef9d11949d7face777d774347e662db8400572cfcf089019f290be1 not found: ID does not exist" Nov 24 12:30:06 crc kubenswrapper[4756]: I1124 12:30:06.483358 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" path="/var/lib/kubelet/pods/e9c8ec10-6d9a-418c-bfb9-39a05c466e9b/volumes" Nov 24 12:30:06 crc kubenswrapper[4756]: I1124 12:30:06.571445 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nmhtt"] Nov 24 12:30:07 crc kubenswrapper[4756]: I1124 12:30:07.006202 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:30:07 crc kubenswrapper[4756]: I1124 12:30:07.006391 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:30:07 crc kubenswrapper[4756]: I1124 12:30:07.061553 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:30:07 crc kubenswrapper[4756]: I1124 12:30:07.211997 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:30:07 crc kubenswrapper[4756]: I1124 12:30:07.240460 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:30:07 crc kubenswrapper[4756]: I1124 12:30:07.254179 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:30:07 crc kubenswrapper[4756]: I1124 12:30:07.585819 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:30:07 crc kubenswrapper[4756]: I1124 12:30:07.586136 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:30:07 crc kubenswrapper[4756]: I1124 12:30:07.631035 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:30:08 crc kubenswrapper[4756]: I1124 12:30:08.244535 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:30:08 crc kubenswrapper[4756]: I1124 12:30:08.972274 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:30:09 crc kubenswrapper[4756]: I1124 12:30:09.615872 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6smp9"] Nov 24 12:30:10 crc kubenswrapper[4756]: I1124 12:30:10.214716 4756 generic.go:334] "Generic (PLEG): container finished" podID="ca136cca-642a-419a-be24-241bbb527020" containerID="6a11dd08c1a64318a99d6701a439bdac6e8dd05ca81e985f088eb5f4a2ed19f7" exitCode=0 Nov 24 12:30:10 crc kubenswrapper[4756]: I1124 12:30:10.215933 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cxgm" event={"ID":"ca136cca-642a-419a-be24-241bbb527020","Type":"ContainerDied","Data":"6a11dd08c1a64318a99d6701a439bdac6e8dd05ca81e985f088eb5f4a2ed19f7"} Nov 24 12:30:10 crc 
kubenswrapper[4756]: I1124 12:30:10.426629 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:30:10 crc kubenswrapper[4756]: I1124 12:30:10.467262 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:30:10 crc kubenswrapper[4756]: I1124 12:30:10.769301 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:30:10 crc kubenswrapper[4756]: I1124 12:30:10.807127 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:30:11 crc kubenswrapper[4756]: I1124 12:30:11.222334 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cxgm" event={"ID":"ca136cca-642a-419a-be24-241bbb527020","Type":"ContainerStarted","Data":"9281791db2ffbdb3235769e69a9687d6b232bfd802988ad42ae4ad327054b5aa"} Nov 24 12:30:11 crc kubenswrapper[4756]: I1124 12:30:11.223388 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6smp9" podUID="dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" containerName="registry-server" containerID="cri-o://4cc47e85fbed29ec171dfe60e440e11cf0bede5e6da100b3ca946f4b0fd69df3" gracePeriod=2 Nov 24 12:30:11 crc kubenswrapper[4756]: I1124 12:30:11.242887 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5cxgm" podStartSLOduration=3.2314746 podStartE2EDuration="45.242859116s" podCreationTimestamp="2025-11-24 12:29:26 +0000 UTC" firstStartedPulling="2025-11-24 12:29:28.608801562 +0000 UTC m=+100.966315704" lastFinishedPulling="2025-11-24 12:30:10.620186058 +0000 UTC m=+142.977700220" observedRunningTime="2025-11-24 12:30:11.241085287 +0000 UTC m=+143.598599449" 
watchObservedRunningTime="2025-11-24 12:30:11.242859116 +0000 UTC m=+143.600373268" Nov 24 12:30:11 crc kubenswrapper[4756]: I1124 12:30:11.687048 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:30:11 crc kubenswrapper[4756]: I1124 12:30:11.853684 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n8b2\" (UniqueName: \"kubernetes.io/projected/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-kube-api-access-8n8b2\") pod \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\" (UID: \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\") " Nov 24 12:30:11 crc kubenswrapper[4756]: I1124 12:30:11.853791 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-utilities\") pod \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\" (UID: \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\") " Nov 24 12:30:11 crc kubenswrapper[4756]: I1124 12:30:11.853839 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-catalog-content\") pod \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\" (UID: \"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76\") " Nov 24 12:30:11 crc kubenswrapper[4756]: I1124 12:30:11.855100 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-utilities" (OuterVolumeSpecName: "utilities") pod "dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" (UID: "dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:11 crc kubenswrapper[4756]: I1124 12:30:11.860353 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-kube-api-access-8n8b2" (OuterVolumeSpecName: "kube-api-access-8n8b2") pod "dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" (UID: "dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76"). InnerVolumeSpecName "kube-api-access-8n8b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:11 crc kubenswrapper[4756]: I1124 12:30:11.902734 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" (UID: "dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:11 crc kubenswrapper[4756]: I1124 12:30:11.956319 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n8b2\" (UniqueName: \"kubernetes.io/projected/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-kube-api-access-8n8b2\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:11 crc kubenswrapper[4756]: I1124 12:30:11.956363 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:11 crc kubenswrapper[4756]: I1124 12:30:11.956374 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.233883 4756 generic.go:334] "Generic (PLEG): container finished" podID="dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" 
containerID="4cc47e85fbed29ec171dfe60e440e11cf0bede5e6da100b3ca946f4b0fd69df3" exitCode=0 Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.233932 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6smp9" event={"ID":"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76","Type":"ContainerDied","Data":"4cc47e85fbed29ec171dfe60e440e11cf0bede5e6da100b3ca946f4b0fd69df3"} Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.233981 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6smp9" event={"ID":"dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76","Type":"ContainerDied","Data":"c3f57c016d3fc9a2bdb3232695105314b2e6df0af836ee462e0b8bd461e12dee"} Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.233992 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6smp9" Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.234002 4756 scope.go:117] "RemoveContainer" containerID="4cc47e85fbed29ec171dfe60e440e11cf0bede5e6da100b3ca946f4b0fd69df3" Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.252319 4756 scope.go:117] "RemoveContainer" containerID="ef135994c664fcf4e249336f0680a4452db6515a9243b34a69a5ac93bb5a0bcb" Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.269602 4756 scope.go:117] "RemoveContainer" containerID="fc55a1cb9a475d9987606d5f5c3aca951d34b1dcfcddc46474d8c28e577ddc44" Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.287368 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6smp9"] Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.290540 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6smp9"] Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.293049 4756 scope.go:117] "RemoveContainer" containerID="4cc47e85fbed29ec171dfe60e440e11cf0bede5e6da100b3ca946f4b0fd69df3" Nov 24 
12:30:12 crc kubenswrapper[4756]: E1124 12:30:12.293951 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc47e85fbed29ec171dfe60e440e11cf0bede5e6da100b3ca946f4b0fd69df3\": container with ID starting with 4cc47e85fbed29ec171dfe60e440e11cf0bede5e6da100b3ca946f4b0fd69df3 not found: ID does not exist" containerID="4cc47e85fbed29ec171dfe60e440e11cf0bede5e6da100b3ca946f4b0fd69df3" Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.293995 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cc47e85fbed29ec171dfe60e440e11cf0bede5e6da100b3ca946f4b0fd69df3"} err="failed to get container status \"4cc47e85fbed29ec171dfe60e440e11cf0bede5e6da100b3ca946f4b0fd69df3\": rpc error: code = NotFound desc = could not find container \"4cc47e85fbed29ec171dfe60e440e11cf0bede5e6da100b3ca946f4b0fd69df3\": container with ID starting with 4cc47e85fbed29ec171dfe60e440e11cf0bede5e6da100b3ca946f4b0fd69df3 not found: ID does not exist" Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.294028 4756 scope.go:117] "RemoveContainer" containerID="ef135994c664fcf4e249336f0680a4452db6515a9243b34a69a5ac93bb5a0bcb" Nov 24 12:30:12 crc kubenswrapper[4756]: E1124 12:30:12.297844 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef135994c664fcf4e249336f0680a4452db6515a9243b34a69a5ac93bb5a0bcb\": container with ID starting with ef135994c664fcf4e249336f0680a4452db6515a9243b34a69a5ac93bb5a0bcb not found: ID does not exist" containerID="ef135994c664fcf4e249336f0680a4452db6515a9243b34a69a5ac93bb5a0bcb" Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.297916 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef135994c664fcf4e249336f0680a4452db6515a9243b34a69a5ac93bb5a0bcb"} err="failed to get container status 
\"ef135994c664fcf4e249336f0680a4452db6515a9243b34a69a5ac93bb5a0bcb\": rpc error: code = NotFound desc = could not find container \"ef135994c664fcf4e249336f0680a4452db6515a9243b34a69a5ac93bb5a0bcb\": container with ID starting with ef135994c664fcf4e249336f0680a4452db6515a9243b34a69a5ac93bb5a0bcb not found: ID does not exist" Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.297954 4756 scope.go:117] "RemoveContainer" containerID="fc55a1cb9a475d9987606d5f5c3aca951d34b1dcfcddc46474d8c28e577ddc44" Nov 24 12:30:12 crc kubenswrapper[4756]: E1124 12:30:12.299603 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc55a1cb9a475d9987606d5f5c3aca951d34b1dcfcddc46474d8c28e577ddc44\": container with ID starting with fc55a1cb9a475d9987606d5f5c3aca951d34b1dcfcddc46474d8c28e577ddc44 not found: ID does not exist" containerID="fc55a1cb9a475d9987606d5f5c3aca951d34b1dcfcddc46474d8c28e577ddc44" Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.299664 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc55a1cb9a475d9987606d5f5c3aca951d34b1dcfcddc46474d8c28e577ddc44"} err="failed to get container status \"fc55a1cb9a475d9987606d5f5c3aca951d34b1dcfcddc46474d8c28e577ddc44\": rpc error: code = NotFound desc = could not find container \"fc55a1cb9a475d9987606d5f5c3aca951d34b1dcfcddc46474d8c28e577ddc44\": container with ID starting with fc55a1cb9a475d9987606d5f5c3aca951d34b1dcfcddc46474d8c28e577ddc44 not found: ID does not exist" Nov 24 12:30:12 crc kubenswrapper[4756]: I1124 12:30:12.483688 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" path="/var/lib/kubelet/pods/dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76/volumes" Nov 24 12:30:14 crc kubenswrapper[4756]: I1124 12:30:14.016008 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kqzr4"] Nov 24 12:30:14 
crc kubenswrapper[4756]: I1124 12:30:14.017730 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kqzr4" podUID="3b977af7-ebc9-4742-8af6-10ab6cfb7877" containerName="registry-server" containerID="cri-o://be3a7a83fdf60c1a087362d94eb29172aca7d791f9137792a9642d525ab4424f" gracePeriod=2 Nov 24 12:30:14 crc kubenswrapper[4756]: I1124 12:30:14.251205 4756 generic.go:334] "Generic (PLEG): container finished" podID="3b977af7-ebc9-4742-8af6-10ab6cfb7877" containerID="be3a7a83fdf60c1a087362d94eb29172aca7d791f9137792a9642d525ab4424f" exitCode=0 Nov 24 12:30:14 crc kubenswrapper[4756]: I1124 12:30:14.251371 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqzr4" event={"ID":"3b977af7-ebc9-4742-8af6-10ab6cfb7877","Type":"ContainerDied","Data":"be3a7a83fdf60c1a087362d94eb29172aca7d791f9137792a9642d525ab4424f"} Nov 24 12:30:14 crc kubenswrapper[4756]: I1124 12:30:14.431427 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:30:14 crc kubenswrapper[4756]: I1124 12:30:14.598968 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b977af7-ebc9-4742-8af6-10ab6cfb7877-utilities\") pod \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\" (UID: \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\") " Nov 24 12:30:14 crc kubenswrapper[4756]: I1124 12:30:14.599127 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhjq9\" (UniqueName: \"kubernetes.io/projected/3b977af7-ebc9-4742-8af6-10ab6cfb7877-kube-api-access-rhjq9\") pod \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\" (UID: \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\") " Nov 24 12:30:14 crc kubenswrapper[4756]: I1124 12:30:14.599217 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b977af7-ebc9-4742-8af6-10ab6cfb7877-catalog-content\") pod \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\" (UID: \"3b977af7-ebc9-4742-8af6-10ab6cfb7877\") " Nov 24 12:30:14 crc kubenswrapper[4756]: I1124 12:30:14.601799 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b977af7-ebc9-4742-8af6-10ab6cfb7877-utilities" (OuterVolumeSpecName: "utilities") pod "3b977af7-ebc9-4742-8af6-10ab6cfb7877" (UID: "3b977af7-ebc9-4742-8af6-10ab6cfb7877"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:14 crc kubenswrapper[4756]: I1124 12:30:14.604347 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b977af7-ebc9-4742-8af6-10ab6cfb7877-kube-api-access-rhjq9" (OuterVolumeSpecName: "kube-api-access-rhjq9") pod "3b977af7-ebc9-4742-8af6-10ab6cfb7877" (UID: "3b977af7-ebc9-4742-8af6-10ab6cfb7877"). InnerVolumeSpecName "kube-api-access-rhjq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:14 crc kubenswrapper[4756]: I1124 12:30:14.701025 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhjq9\" (UniqueName: \"kubernetes.io/projected/3b977af7-ebc9-4742-8af6-10ab6cfb7877-kube-api-access-rhjq9\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:14 crc kubenswrapper[4756]: I1124 12:30:14.701087 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b977af7-ebc9-4742-8af6-10ab6cfb7877-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:14 crc kubenswrapper[4756]: I1124 12:30:14.711671 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b977af7-ebc9-4742-8af6-10ab6cfb7877-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b977af7-ebc9-4742-8af6-10ab6cfb7877" (UID: "3b977af7-ebc9-4742-8af6-10ab6cfb7877"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:14 crc kubenswrapper[4756]: I1124 12:30:14.803052 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b977af7-ebc9-4742-8af6-10ab6cfb7877-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:15 crc kubenswrapper[4756]: I1124 12:30:15.261022 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqzr4" event={"ID":"3b977af7-ebc9-4742-8af6-10ab6cfb7877","Type":"ContainerDied","Data":"8e793f5ee13d334e89871846f2db702fc91dda9bed86d9aae93e5b71732c4311"} Nov 24 12:30:15 crc kubenswrapper[4756]: I1124 12:30:15.261091 4756 scope.go:117] "RemoveContainer" containerID="be3a7a83fdf60c1a087362d94eb29172aca7d791f9137792a9642d525ab4424f" Nov 24 12:30:15 crc kubenswrapper[4756]: I1124 12:30:15.261103 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kqzr4" Nov 24 12:30:15 crc kubenswrapper[4756]: I1124 12:30:15.280765 4756 scope.go:117] "RemoveContainer" containerID="1231df94d81a5d76c9e2c5878b874cf1e72b64d8cf803cb71ab20beac2e54d9c" Nov 24 12:30:15 crc kubenswrapper[4756]: I1124 12:30:15.295462 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kqzr4"] Nov 24 12:30:15 crc kubenswrapper[4756]: I1124 12:30:15.298916 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kqzr4"] Nov 24 12:30:15 crc kubenswrapper[4756]: I1124 12:30:15.320760 4756 scope.go:117] "RemoveContainer" containerID="909445e361a18957635b31c6fae8b12ebb2f773a598c9a194a53b68ae6dd8ccd" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.323833 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.325247 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.325425 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.325553 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.327751 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.328062 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.328278 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.336906 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.338862 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.341389 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.349694 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.350357 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.421528 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.434863 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.444641 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 12:30:16 crc kubenswrapper[4756]: I1124 12:30:16.491487 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b977af7-ebc9-4742-8af6-10ab6cfb7877" path="/var/lib/kubelet/pods/3b977af7-ebc9-4742-8af6-10ab6cfb7877/volumes" Nov 24 12:30:16 crc kubenswrapper[4756]: W1124 12:30:16.836168 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-263416dab303ea4e4ef37e412ad99d4252b030ad1c8b33d59761ab3ca836a385 WatchSource:0}: Error finding container 263416dab303ea4e4ef37e412ad99d4252b030ad1c8b33d59761ab3ca836a385: Status 404 returned error can't find the container with id 263416dab303ea4e4ef37e412ad99d4252b030ad1c8b33d59761ab3ca836a385 Nov 24 12:30:17 crc kubenswrapper[4756]: W1124 12:30:17.083189 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-b6618d316c43926548cb8cb24a251903d9e37471fb4f3a046a0ebc43fbc09fd9 WatchSource:0}: Error finding container b6618d316c43926548cb8cb24a251903d9e37471fb4f3a046a0ebc43fbc09fd9: Status 404 returned error can't find the container with id b6618d316c43926548cb8cb24a251903d9e37471fb4f3a046a0ebc43fbc09fd9 Nov 24 12:30:17 crc kubenswrapper[4756]: W1124 12:30:17.132970 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-f3dfd25e27d080fe339c5e94dc3c35207dd1e650774b0cfaa19c4ac21231e4f4 WatchSource:0}: Error finding container f3dfd25e27d080fe339c5e94dc3c35207dd1e650774b0cfaa19c4ac21231e4f4: Status 404 returned error can't find the container with id f3dfd25e27d080fe339c5e94dc3c35207dd1e650774b0cfaa19c4ac21231e4f4 Nov 24 12:30:17 crc 
kubenswrapper[4756]: I1124 12:30:17.283652 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7ed8a56d0b9328bb2aa2f7922775519e18603386a733cd842dc008b05879d6e5"} Nov 24 12:30:17 crc kubenswrapper[4756]: I1124 12:30:17.283708 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"263416dab303ea4e4ef37e412ad99d4252b030ad1c8b33d59761ab3ca836a385"} Nov 24 12:30:17 crc kubenswrapper[4756]: I1124 12:30:17.283948 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:30:17 crc kubenswrapper[4756]: I1124 12:30:17.286015 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f12e81ff7cd3b58e121e903ccae9d0dc1ecc0530d78c256d6974671a8caa730f"} Nov 24 12:30:17 crc kubenswrapper[4756]: I1124 12:30:17.286042 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f3dfd25e27d080fe339c5e94dc3c35207dd1e650774b0cfaa19c4ac21231e4f4"} Nov 24 12:30:17 crc kubenswrapper[4756]: I1124 12:30:17.289294 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b2935cf88839e68a15767bea9abde065b3729dd9f17c986f1ef41ac7290f4795"} Nov 24 12:30:17 crc kubenswrapper[4756]: I1124 12:30:17.289327 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b6618d316c43926548cb8cb24a251903d9e37471fb4f3a046a0ebc43fbc09fd9"} Nov 24 12:30:17 crc kubenswrapper[4756]: I1124 12:30:17.361781 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:30:17 crc kubenswrapper[4756]: I1124 12:30:17.361844 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:30:17 crc kubenswrapper[4756]: I1124 12:30:17.413245 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:30:18 crc kubenswrapper[4756]: I1124 12:30:18.344415 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:30:20 crc kubenswrapper[4756]: I1124 12:30:20.417903 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5cxgm"] Nov 24 12:30:20 crc kubenswrapper[4756]: I1124 12:30:20.418972 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5cxgm" podUID="ca136cca-642a-419a-be24-241bbb527020" containerName="registry-server" containerID="cri-o://9281791db2ffbdb3235769e69a9687d6b232bfd802988ad42ae4ad327054b5aa" gracePeriod=2 Nov 24 12:30:20 crc kubenswrapper[4756]: I1124 12:30:20.838348 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:30:20 crc kubenswrapper[4756]: I1124 12:30:20.898797 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca136cca-642a-419a-be24-241bbb527020-utilities\") pod \"ca136cca-642a-419a-be24-241bbb527020\" (UID: \"ca136cca-642a-419a-be24-241bbb527020\") " Nov 24 12:30:20 crc kubenswrapper[4756]: I1124 12:30:20.898940 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca136cca-642a-419a-be24-241bbb527020-catalog-content\") pod \"ca136cca-642a-419a-be24-241bbb527020\" (UID: \"ca136cca-642a-419a-be24-241bbb527020\") " Nov 24 12:30:20 crc kubenswrapper[4756]: I1124 12:30:20.898966 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cswgx\" (UniqueName: \"kubernetes.io/projected/ca136cca-642a-419a-be24-241bbb527020-kube-api-access-cswgx\") pod \"ca136cca-642a-419a-be24-241bbb527020\" (UID: \"ca136cca-642a-419a-be24-241bbb527020\") " Nov 24 12:30:20 crc kubenswrapper[4756]: I1124 12:30:20.901479 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca136cca-642a-419a-be24-241bbb527020-utilities" (OuterVolumeSpecName: "utilities") pod "ca136cca-642a-419a-be24-241bbb527020" (UID: "ca136cca-642a-419a-be24-241bbb527020"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:20 crc kubenswrapper[4756]: I1124 12:30:20.919646 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca136cca-642a-419a-be24-241bbb527020-kube-api-access-cswgx" (OuterVolumeSpecName: "kube-api-access-cswgx") pod "ca136cca-642a-419a-be24-241bbb527020" (UID: "ca136cca-642a-419a-be24-241bbb527020"). InnerVolumeSpecName "kube-api-access-cswgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.000613 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca136cca-642a-419a-be24-241bbb527020-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.000656 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cswgx\" (UniqueName: \"kubernetes.io/projected/ca136cca-642a-419a-be24-241bbb527020-kube-api-access-cswgx\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.015202 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca136cca-642a-419a-be24-241bbb527020-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca136cca-642a-419a-be24-241bbb527020" (UID: "ca136cca-642a-419a-be24-241bbb527020"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.101972 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca136cca-642a-419a-be24-241bbb527020-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.315720 4756 generic.go:334] "Generic (PLEG): container finished" podID="ca136cca-642a-419a-be24-241bbb527020" containerID="9281791db2ffbdb3235769e69a9687d6b232bfd802988ad42ae4ad327054b5aa" exitCode=0 Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.315821 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5cxgm" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.316003 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cxgm" event={"ID":"ca136cca-642a-419a-be24-241bbb527020","Type":"ContainerDied","Data":"9281791db2ffbdb3235769e69a9687d6b232bfd802988ad42ae4ad327054b5aa"} Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.316125 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cxgm" event={"ID":"ca136cca-642a-419a-be24-241bbb527020","Type":"ContainerDied","Data":"e132d63aee52014cb5b3e3234ed69c555f0e43c1ba0757c4cef808145e210228"} Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.316197 4756 scope.go:117] "RemoveContainer" containerID="9281791db2ffbdb3235769e69a9687d6b232bfd802988ad42ae4ad327054b5aa" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.332699 4756 scope.go:117] "RemoveContainer" containerID="6a11dd08c1a64318a99d6701a439bdac6e8dd05ca81e985f088eb5f4a2ed19f7" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.348723 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5cxgm"] Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.353966 4756 scope.go:117] "RemoveContainer" containerID="23cfa332aa950d34b91351909b92e53aed04f123b461f6114756a0f0eeb47c86" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.354495 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5cxgm"] Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.377029 4756 scope.go:117] "RemoveContainer" containerID="9281791db2ffbdb3235769e69a9687d6b232bfd802988ad42ae4ad327054b5aa" Nov 24 12:30:21 crc kubenswrapper[4756]: E1124 12:30:21.377729 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9281791db2ffbdb3235769e69a9687d6b232bfd802988ad42ae4ad327054b5aa\": container with ID starting with 9281791db2ffbdb3235769e69a9687d6b232bfd802988ad42ae4ad327054b5aa not found: ID does not exist" containerID="9281791db2ffbdb3235769e69a9687d6b232bfd802988ad42ae4ad327054b5aa" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.377792 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9281791db2ffbdb3235769e69a9687d6b232bfd802988ad42ae4ad327054b5aa"} err="failed to get container status \"9281791db2ffbdb3235769e69a9687d6b232bfd802988ad42ae4ad327054b5aa\": rpc error: code = NotFound desc = could not find container \"9281791db2ffbdb3235769e69a9687d6b232bfd802988ad42ae4ad327054b5aa\": container with ID starting with 9281791db2ffbdb3235769e69a9687d6b232bfd802988ad42ae4ad327054b5aa not found: ID does not exist" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.377840 4756 scope.go:117] "RemoveContainer" containerID="6a11dd08c1a64318a99d6701a439bdac6e8dd05ca81e985f088eb5f4a2ed19f7" Nov 24 12:30:21 crc kubenswrapper[4756]: E1124 12:30:21.378146 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a11dd08c1a64318a99d6701a439bdac6e8dd05ca81e985f088eb5f4a2ed19f7\": container with ID starting with 6a11dd08c1a64318a99d6701a439bdac6e8dd05ca81e985f088eb5f4a2ed19f7 not found: ID does not exist" containerID="6a11dd08c1a64318a99d6701a439bdac6e8dd05ca81e985f088eb5f4a2ed19f7" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.378263 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a11dd08c1a64318a99d6701a439bdac6e8dd05ca81e985f088eb5f4a2ed19f7"} err="failed to get container status \"6a11dd08c1a64318a99d6701a439bdac6e8dd05ca81e985f088eb5f4a2ed19f7\": rpc error: code = NotFound desc = could not find container \"6a11dd08c1a64318a99d6701a439bdac6e8dd05ca81e985f088eb5f4a2ed19f7\": container with ID 
starting with 6a11dd08c1a64318a99d6701a439bdac6e8dd05ca81e985f088eb5f4a2ed19f7 not found: ID does not exist" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.378350 4756 scope.go:117] "RemoveContainer" containerID="23cfa332aa950d34b91351909b92e53aed04f123b461f6114756a0f0eeb47c86" Nov 24 12:30:21 crc kubenswrapper[4756]: E1124 12:30:21.378664 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23cfa332aa950d34b91351909b92e53aed04f123b461f6114756a0f0eeb47c86\": container with ID starting with 23cfa332aa950d34b91351909b92e53aed04f123b461f6114756a0f0eeb47c86 not found: ID does not exist" containerID="23cfa332aa950d34b91351909b92e53aed04f123b461f6114756a0f0eeb47c86" Nov 24 12:30:21 crc kubenswrapper[4756]: I1124 12:30:21.378746 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23cfa332aa950d34b91351909b92e53aed04f123b461f6114756a0f0eeb47c86"} err="failed to get container status \"23cfa332aa950d34b91351909b92e53aed04f123b461f6114756a0f0eeb47c86\": rpc error: code = NotFound desc = could not find container \"23cfa332aa950d34b91351909b92e53aed04f123b461f6114756a0f0eeb47c86\": container with ID starting with 23cfa332aa950d34b91351909b92e53aed04f123b461f6114756a0f0eeb47c86 not found: ID does not exist" Nov 24 12:30:22 crc kubenswrapper[4756]: I1124 12:30:22.485249 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca136cca-642a-419a-be24-241bbb527020" path="/var/lib/kubelet/pods/ca136cca-642a-419a-be24-241bbb527020/volumes" Nov 24 12:30:31 crc kubenswrapper[4756]: I1124 12:30:31.616408 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" podUID="35a6cd00-6612-4277-9e8b-ed71bdb5e01d" containerName="oauth-openshift" containerID="cri-o://a6710f37ec5a7f43f92a289d92bd05c29202c370ee22464b01277d6b57bd0f07" gracePeriod=15 Nov 24 12:30:32 crc 
kubenswrapper[4756]: I1124 12:30:32.026523 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059223 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5768cdb577-svd7x"] Nov 24 12:30:32 crc kubenswrapper[4756]: E1124 12:30:32.059527 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca136cca-642a-419a-be24-241bbb527020" containerName="registry-server" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059541 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca136cca-642a-419a-be24-241bbb527020" containerName="registry-server" Nov 24 12:30:32 crc kubenswrapper[4756]: E1124 12:30:32.059557 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b977af7-ebc9-4742-8af6-10ab6cfb7877" containerName="extract-utilities" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059563 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b977af7-ebc9-4742-8af6-10ab6cfb7877" containerName="extract-utilities" Nov 24 12:30:32 crc kubenswrapper[4756]: E1124 12:30:32.059574 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b977af7-ebc9-4742-8af6-10ab6cfb7877" containerName="registry-server" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059580 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b977af7-ebc9-4742-8af6-10ab6cfb7877" containerName="registry-server" Nov 24 12:30:32 crc kubenswrapper[4756]: E1124 12:30:32.059591 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" containerName="extract-content" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059598 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" containerName="extract-content" Nov 24 12:30:32 crc 
kubenswrapper[4756]: E1124 12:30:32.059608 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" containerName="extract-utilities" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059614 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" containerName="extract-utilities" Nov 24 12:30:32 crc kubenswrapper[4756]: E1124 12:30:32.059621 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" containerName="extract-utilities" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059628 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" containerName="extract-utilities" Nov 24 12:30:32 crc kubenswrapper[4756]: E1124 12:30:32.059637 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca136cca-642a-419a-be24-241bbb527020" containerName="extract-content" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059643 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca136cca-642a-419a-be24-241bbb527020" containerName="extract-content" Nov 24 12:30:32 crc kubenswrapper[4756]: E1124 12:30:32.059653 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfeb738d-4835-45a7-90a2-440d45459f4d" containerName="collect-profiles" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059659 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfeb738d-4835-45a7-90a2-440d45459f4d" containerName="collect-profiles" Nov 24 12:30:32 crc kubenswrapper[4756]: E1124 12:30:32.059672 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b977af7-ebc9-4742-8af6-10ab6cfb7877" containerName="extract-content" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059683 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b977af7-ebc9-4742-8af6-10ab6cfb7877" containerName="extract-content" Nov 24 12:30:32 crc 
kubenswrapper[4756]: E1124 12:30:32.059696 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" containerName="registry-server" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059707 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" containerName="registry-server" Nov 24 12:30:32 crc kubenswrapper[4756]: E1124 12:30:32.059719 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" containerName="extract-content" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059727 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" containerName="extract-content" Nov 24 12:30:32 crc kubenswrapper[4756]: E1124 12:30:32.059744 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" containerName="registry-server" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059751 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" containerName="registry-server" Nov 24 12:30:32 crc kubenswrapper[4756]: E1124 12:30:32.059763 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a6cd00-6612-4277-9e8b-ed71bdb5e01d" containerName="oauth-openshift" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059769 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a6cd00-6612-4277-9e8b-ed71bdb5e01d" containerName="oauth-openshift" Nov 24 12:30:32 crc kubenswrapper[4756]: E1124 12:30:32.059778 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca136cca-642a-419a-be24-241bbb527020" containerName="extract-utilities" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059783 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca136cca-642a-419a-be24-241bbb527020" containerName="extract-utilities" Nov 24 12:30:32 crc 
kubenswrapper[4756]: I1124 12:30:32.059873 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfeb738d-4835-45a7-90a2-440d45459f4d" containerName="collect-profiles" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059888 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca136cca-642a-419a-be24-241bbb527020" containerName="registry-server" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059906 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a6cd00-6612-4277-9e8b-ed71bdb5e01d" containerName="oauth-openshift" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059919 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c8ec10-6d9a-418c-bfb9-39a05c466e9b" containerName="registry-server" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059930 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b977af7-ebc9-4742-8af6-10ab6cfb7877" containerName="registry-server" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.059939 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc70eb17-e5c5-4cfb-8e13-fae2fe6c2f76" containerName="registry-server" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.060383 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.061814 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nlzl\" (UniqueName: \"kubernetes.io/projected/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-kube-api-access-9nlzl\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.061868 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-router-certs\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.061891 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-idp-0-file-data\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.061911 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-provider-selection\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.061944 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-cliconfig\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: 
\"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.061967 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-audit-policies\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.061986 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-serving-cert\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.062011 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-ocp-branding-template\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.062026 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-audit-dir\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.062051 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-error\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.062071 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-service-ca\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.062091 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-trusted-ca-bundle\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.062110 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-login\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.062134 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-session\") pod \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\" (UID: \"35a6cd00-6612-4277-9e8b-ed71bdb5e01d\") " Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.062245 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-service-ca\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.062285 
4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcbdd96c-fcce-4655-8d08-814b59cb6f08-audit-policies\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.062334 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm9pb\" (UniqueName: \"kubernetes.io/projected/bcbdd96c-fcce-4655-8d08-814b59cb6f08-kube-api-access-fm9pb\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.062399 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063061 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063074 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063117 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063203 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063469 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063518 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063549 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063572 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-user-template-error\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063606 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063627 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcbdd96c-fcce-4655-8d08-814b59cb6f08-audit-dir\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063650 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-session\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063678 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-user-template-login\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063702 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063731 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-router-certs\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063764 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063816 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063829 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063842 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063855 4756 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.063867 4756 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.072308 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-kube-api-access-9nlzl" (OuterVolumeSpecName: "kube-api-access-9nlzl") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). InnerVolumeSpecName "kube-api-access-9nlzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.073227 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.074970 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.075557 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.076251 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.081570 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5768cdb577-svd7x"] Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.082951 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.083064 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.083518 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.083698 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "35a6cd00-6612-4277-9e8b-ed71bdb5e01d" (UID: "35a6cd00-6612-4277-9e8b-ed71bdb5e01d"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.164929 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-service-ca\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.165249 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcbdd96c-fcce-4655-8d08-814b59cb6f08-audit-policies\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.165369 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm9pb\" (UniqueName: \"kubernetes.io/projected/bcbdd96c-fcce-4655-8d08-814b59cb6f08-kube-api-access-fm9pb\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.165452 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.165527 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.165593 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.165666 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-user-template-error\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.165747 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.165820 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcbdd96c-fcce-4655-8d08-814b59cb6f08-audit-dir\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc 
kubenswrapper[4756]: I1124 12:30:32.165894 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.165965 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-session\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.166025 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-service-ca\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.166131 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcbdd96c-fcce-4655-8d08-814b59cb6f08-audit-dir\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.166025 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcbdd96c-fcce-4655-8d08-814b59cb6f08-audit-policies\") pod \"oauth-openshift-5768cdb577-svd7x\" 
(UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.166037 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-user-template-login\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.166368 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-router-certs\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.166377 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.166765 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.166993 4756 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.167094 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.167178 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.167264 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.167331 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.167408 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.167483 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nlzl\" (UniqueName: \"kubernetes.io/projected/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-kube-api-access-9nlzl\") on node \"crc\" DevicePath \"\"" 
Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.167544 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.167630 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/35a6cd00-6612-4277-9e8b-ed71bdb5e01d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.168105 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.169082 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.169126 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.169088 
4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-router-certs\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.169420 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.169514 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-user-template-error\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.169889 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-user-template-login\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.170742 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.171749 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcbdd96c-fcce-4655-8d08-814b59cb6f08-v4-0-config-system-session\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.182243 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm9pb\" (UniqueName: \"kubernetes.io/projected/bcbdd96c-fcce-4655-8d08-814b59cb6f08-kube-api-access-fm9pb\") pod \"oauth-openshift-5768cdb577-svd7x\" (UID: \"bcbdd96c-fcce-4655-8d08-814b59cb6f08\") " pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.380385 4756 generic.go:334] "Generic (PLEG): container finished" podID="35a6cd00-6612-4277-9e8b-ed71bdb5e01d" containerID="a6710f37ec5a7f43f92a289d92bd05c29202c370ee22464b01277d6b57bd0f07" exitCode=0 Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.380439 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" event={"ID":"35a6cd00-6612-4277-9e8b-ed71bdb5e01d","Type":"ContainerDied","Data":"a6710f37ec5a7f43f92a289d92bd05c29202c370ee22464b01277d6b57bd0f07"} Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.380468 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" event={"ID":"35a6cd00-6612-4277-9e8b-ed71bdb5e01d","Type":"ContainerDied","Data":"ed8c8bb37da683e7ab95c3de0f37dbc3af7f895c0cbab6b5f2f77363f33a570b"} Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.380485 4756 scope.go:117] 
"RemoveContainer" containerID="a6710f37ec5a7f43f92a289d92bd05c29202c370ee22464b01277d6b57bd0f07" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.380444 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nmhtt" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.398863 4756 scope.go:117] "RemoveContainer" containerID="a6710f37ec5a7f43f92a289d92bd05c29202c370ee22464b01277d6b57bd0f07" Nov 24 12:30:32 crc kubenswrapper[4756]: E1124 12:30:32.399285 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6710f37ec5a7f43f92a289d92bd05c29202c370ee22464b01277d6b57bd0f07\": container with ID starting with a6710f37ec5a7f43f92a289d92bd05c29202c370ee22464b01277d6b57bd0f07 not found: ID does not exist" containerID="a6710f37ec5a7f43f92a289d92bd05c29202c370ee22464b01277d6b57bd0f07" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.399319 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6710f37ec5a7f43f92a289d92bd05c29202c370ee22464b01277d6b57bd0f07"} err="failed to get container status \"a6710f37ec5a7f43f92a289d92bd05c29202c370ee22464b01277d6b57bd0f07\": rpc error: code = NotFound desc = could not find container \"a6710f37ec5a7f43f92a289d92bd05c29202c370ee22464b01277d6b57bd0f07\": container with ID starting with a6710f37ec5a7f43f92a289d92bd05c29202c370ee22464b01277d6b57bd0f07 not found: ID does not exist" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.406411 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nmhtt"] Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.409790 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nmhtt"] Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.425171 4756 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.483716 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a6cd00-6612-4277-9e8b-ed71bdb5e01d" path="/var/lib/kubelet/pods/35a6cd00-6612-4277-9e8b-ed71bdb5e01d/volumes" Nov 24 12:30:32 crc kubenswrapper[4756]: I1124 12:30:32.859444 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5768cdb577-svd7x"] Nov 24 12:30:33 crc kubenswrapper[4756]: I1124 12:30:33.402313 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" event={"ID":"bcbdd96c-fcce-4655-8d08-814b59cb6f08","Type":"ContainerStarted","Data":"43dd31677b7ba225f9e15f506a54903b172e0ff5745ac1ef35aa0cb4104a995a"} Nov 24 12:30:33 crc kubenswrapper[4756]: I1124 12:30:33.402751 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" event={"ID":"bcbdd96c-fcce-4655-8d08-814b59cb6f08","Type":"ContainerStarted","Data":"806d3dc38f28fa6b0fd1b4a2c59e1f7034d8c4bb6e4cd60a44f69382418d4c25"} Nov 24 12:30:33 crc kubenswrapper[4756]: I1124 12:30:33.402784 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:33 crc kubenswrapper[4756]: I1124 12:30:33.415726 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" Nov 24 12:30:33 crc kubenswrapper[4756]: I1124 12:30:33.431757 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5768cdb577-svd7x" podStartSLOduration=27.43173568 podStartE2EDuration="27.43173568s" podCreationTimestamp="2025-11-24 12:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:30:33.429081253 +0000 UTC m=+165.786595485" watchObservedRunningTime="2025-11-24 12:30:33.43173568 +0000 UTC m=+165.789249822" Nov 24 12:30:33 crc kubenswrapper[4756]: I1124 12:30:33.479464 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:30:33 crc kubenswrapper[4756]: I1124 12:30:33.479560 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:30:46 crc kubenswrapper[4756]: I1124 12:30:46.539922 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 12:31:03 crc kubenswrapper[4756]: I1124 12:31:03.479852 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:31:03 crc kubenswrapper[4756]: I1124 12:31:03.480588 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.632170 4756 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nb9nl"] Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.633134 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nb9nl" podUID="1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" containerName="registry-server" containerID="cri-o://c798eccd3660a835606ea7a58605a4dba48057dcdc79bbf74a02c5fc93769505" gracePeriod=30 Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.635829 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bdhwb"] Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.636257 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bdhwb" podUID="a3307bf9-529c-4d16-8a9c-87b46c381ca6" containerName="registry-server" containerID="cri-o://39856e30705eb367f62342113dc9a825ea190e2daeee5ce3aeb074ebce6c4918" gracePeriod=30 Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.643898 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xbhls"] Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.644361 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" podUID="3ff06ae8-376c-4319-9358-200e6f312237" containerName="marketplace-operator" containerID="cri-o://9e4ed8f3244e70600d1acdbd5e287e7a52c93cd48e3eb0fe869674be3bf4d511" gracePeriod=30 Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.668859 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8d29p"] Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.669741 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8d29p" podUID="ef577d14-3e75-4898-a0d8-cf7f03912760" 
containerName="registry-server" containerID="cri-o://883fd2e2fdce58c71b1c77af103b047dccd44a41715e848810a81f9e9ea9c8c4" gracePeriod=30 Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.689257 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-29s2z"] Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.689688 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-29s2z" podUID="609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" containerName="registry-server" containerID="cri-o://143daebb13efa36f19aea504d071fa2576cad2de36dde80bae5f6d6b1bdbf2e5" gracePeriod=30 Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.708274 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzrdb"] Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.709331 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.711620 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzrdb"] Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.740050 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7cc1cad9-8e95-4b2c-bfb2-dd376178315f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dzrdb\" (UID: \"7cc1cad9-8e95-4b2c-bfb2-dd376178315f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.740149 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7cc1cad9-8e95-4b2c-bfb2-dd376178315f-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-dzrdb\" (UID: \"7cc1cad9-8e95-4b2c-bfb2-dd376178315f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.740244 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jq68\" (UniqueName: \"kubernetes.io/projected/7cc1cad9-8e95-4b2c-bfb2-dd376178315f-kube-api-access-4jq68\") pod \"marketplace-operator-79b997595-dzrdb\" (UID: \"7cc1cad9-8e95-4b2c-bfb2-dd376178315f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.841279 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jq68\" (UniqueName: \"kubernetes.io/projected/7cc1cad9-8e95-4b2c-bfb2-dd376178315f-kube-api-access-4jq68\") pod \"marketplace-operator-79b997595-dzrdb\" (UID: \"7cc1cad9-8e95-4b2c-bfb2-dd376178315f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.841361 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7cc1cad9-8e95-4b2c-bfb2-dd376178315f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dzrdb\" (UID: \"7cc1cad9-8e95-4b2c-bfb2-dd376178315f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.841401 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7cc1cad9-8e95-4b2c-bfb2-dd376178315f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dzrdb\" (UID: \"7cc1cad9-8e95-4b2c-bfb2-dd376178315f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.843263 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7cc1cad9-8e95-4b2c-bfb2-dd376178315f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dzrdb\" (UID: \"7cc1cad9-8e95-4b2c-bfb2-dd376178315f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.849841 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7cc1cad9-8e95-4b2c-bfb2-dd376178315f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dzrdb\" (UID: \"7cc1cad9-8e95-4b2c-bfb2-dd376178315f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" Nov 24 12:31:04 crc kubenswrapper[4756]: I1124 12:31:04.859856 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jq68\" (UniqueName: \"kubernetes.io/projected/7cc1cad9-8e95-4b2c-bfb2-dd376178315f-kube-api-access-4jq68\") pod \"marketplace-operator-79b997595-dzrdb\" (UID: \"7cc1cad9-8e95-4b2c-bfb2-dd376178315f\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.054508 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.068785 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.115936 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.122254 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.206301 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.218815 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.248772 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3ff06ae8-376c-4319-9358-200e6f312237-marketplace-operator-metrics\") pod \"3ff06ae8-376c-4319-9358-200e6f312237\" (UID: \"3ff06ae8-376c-4319-9358-200e6f312237\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.248837 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-catalog-content\") pod \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\" (UID: \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.248876 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw95z\" (UniqueName: \"kubernetes.io/projected/3ff06ae8-376c-4319-9358-200e6f312237-kube-api-access-xw95z\") pod \"3ff06ae8-376c-4319-9358-200e6f312237\" (UID: \"3ff06ae8-376c-4319-9358-200e6f312237\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.248955 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ff06ae8-376c-4319-9358-200e6f312237-marketplace-trusted-ca\") pod \"3ff06ae8-376c-4319-9358-200e6f312237\" (UID: \"3ff06ae8-376c-4319-9358-200e6f312237\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 
12:31:05.248991 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3307bf9-529c-4d16-8a9c-87b46c381ca6-utilities\") pod \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\" (UID: \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.249012 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gclf\" (UniqueName: \"kubernetes.io/projected/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-kube-api-access-4gclf\") pod \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\" (UID: \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.249037 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn8pg\" (UniqueName: \"kubernetes.io/projected/a3307bf9-529c-4d16-8a9c-87b46c381ca6-kube-api-access-tn8pg\") pod \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\" (UID: \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.249063 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-utilities\") pod \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\" (UID: \"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.249085 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3307bf9-529c-4d16-8a9c-87b46c381ca6-catalog-content\") pod \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\" (UID: \"a3307bf9-529c-4d16-8a9c-87b46c381ca6\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.255374 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3307bf9-529c-4d16-8a9c-87b46c381ca6-utilities" (OuterVolumeSpecName: "utilities") pod 
"a3307bf9-529c-4d16-8a9c-87b46c381ca6" (UID: "a3307bf9-529c-4d16-8a9c-87b46c381ca6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.257131 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ff06ae8-376c-4319-9358-200e6f312237-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3ff06ae8-376c-4319-9358-200e6f312237" (UID: "3ff06ae8-376c-4319-9358-200e6f312237"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.258918 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff06ae8-376c-4319-9358-200e6f312237-kube-api-access-xw95z" (OuterVolumeSpecName: "kube-api-access-xw95z") pod "3ff06ae8-376c-4319-9358-200e6f312237" (UID: "3ff06ae8-376c-4319-9358-200e6f312237"). InnerVolumeSpecName "kube-api-access-xw95z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.261654 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff06ae8-376c-4319-9358-200e6f312237-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3ff06ae8-376c-4319-9358-200e6f312237" (UID: "3ff06ae8-376c-4319-9358-200e6f312237"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.265648 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3307bf9-529c-4d16-8a9c-87b46c381ca6-kube-api-access-tn8pg" (OuterVolumeSpecName: "kube-api-access-tn8pg") pod "a3307bf9-529c-4d16-8a9c-87b46c381ca6" (UID: "a3307bf9-529c-4d16-8a9c-87b46c381ca6"). InnerVolumeSpecName "kube-api-access-tn8pg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.267234 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-kube-api-access-4gclf" (OuterVolumeSpecName: "kube-api-access-4gclf") pod "1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" (UID: "1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4"). InnerVolumeSpecName "kube-api-access-4gclf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.284107 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-utilities" (OuterVolumeSpecName: "utilities") pod "1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" (UID: "1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.333391 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" (UID: "1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.339501 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3307bf9-529c-4d16-8a9c-87b46c381ca6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3307bf9-529c-4d16-8a9c-87b46c381ca6" (UID: "a3307bf9-529c-4d16-8a9c-87b46c381ca6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.349967 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef577d14-3e75-4898-a0d8-cf7f03912760-catalog-content\") pod \"ef577d14-3e75-4898-a0d8-cf7f03912760\" (UID: \"ef577d14-3e75-4898-a0d8-cf7f03912760\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350027 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7wjj\" (UniqueName: \"kubernetes.io/projected/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-kube-api-access-l7wjj\") pod \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\" (UID: \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350080 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef577d14-3e75-4898-a0d8-cf7f03912760-utilities\") pod \"ef577d14-3e75-4898-a0d8-cf7f03912760\" (UID: \"ef577d14-3e75-4898-a0d8-cf7f03912760\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350099 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-utilities\") pod \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\" (UID: \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350178 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-catalog-content\") pod \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\" (UID: \"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350256 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkn8b\" 
(UniqueName: \"kubernetes.io/projected/ef577d14-3e75-4898-a0d8-cf7f03912760-kube-api-access-bkn8b\") pod \"ef577d14-3e75-4898-a0d8-cf7f03912760\" (UID: \"ef577d14-3e75-4898-a0d8-cf7f03912760\") " Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350469 4756 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ff06ae8-376c-4319-9358-200e6f312237-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350486 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3307bf9-529c-4d16-8a9c-87b46c381ca6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350497 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gclf\" (UniqueName: \"kubernetes.io/projected/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-kube-api-access-4gclf\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350510 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn8pg\" (UniqueName: \"kubernetes.io/projected/a3307bf9-529c-4d16-8a9c-87b46c381ca6-kube-api-access-tn8pg\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350518 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350526 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3307bf9-529c-4d16-8a9c-87b46c381ca6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350534 4756 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/3ff06ae8-376c-4319-9358-200e6f312237-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350542 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350550 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw95z\" (UniqueName: \"kubernetes.io/projected/3ff06ae8-376c-4319-9358-200e6f312237-kube-api-access-xw95z\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.350829 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef577d14-3e75-4898-a0d8-cf7f03912760-utilities" (OuterVolumeSpecName: "utilities") pod "ef577d14-3e75-4898-a0d8-cf7f03912760" (UID: "ef577d14-3e75-4898-a0d8-cf7f03912760"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.351074 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-utilities" (OuterVolumeSpecName: "utilities") pod "609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" (UID: "609d4ed3-7c39-4c20-9c1a-1b536ef07a7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.353840 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-kube-api-access-l7wjj" (OuterVolumeSpecName: "kube-api-access-l7wjj") pod "609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" (UID: "609d4ed3-7c39-4c20-9c1a-1b536ef07a7e"). InnerVolumeSpecName "kube-api-access-l7wjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.354875 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef577d14-3e75-4898-a0d8-cf7f03912760-kube-api-access-bkn8b" (OuterVolumeSpecName: "kube-api-access-bkn8b") pod "ef577d14-3e75-4898-a0d8-cf7f03912760" (UID: "ef577d14-3e75-4898-a0d8-cf7f03912760"). InnerVolumeSpecName "kube-api-access-bkn8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.373035 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef577d14-3e75-4898-a0d8-cf7f03912760-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef577d14-3e75-4898-a0d8-cf7f03912760" (UID: "ef577d14-3e75-4898-a0d8-cf7f03912760"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.436376 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" (UID: "609d4ed3-7c39-4c20-9c1a-1b536ef07a7e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.452081 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.452141 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkn8b\" (UniqueName: \"kubernetes.io/projected/ef577d14-3e75-4898-a0d8-cf7f03912760-kube-api-access-bkn8b\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.452229 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef577d14-3e75-4898-a0d8-cf7f03912760-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.452242 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7wjj\" (UniqueName: \"kubernetes.io/projected/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-kube-api-access-l7wjj\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.452254 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.452289 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef577d14-3e75-4898-a0d8-cf7f03912760-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.601263 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzrdb"] Nov 24 12:31:05 crc kubenswrapper[4756]: W1124 12:31:05.617248 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cc1cad9_8e95_4b2c_bfb2_dd376178315f.slice/crio-da84fea0f55af047fc4b13fdf503551e01c0aed1b8f9a11b81307c133ce1af08 WatchSource:0}: Error finding container da84fea0f55af047fc4b13fdf503551e01c0aed1b8f9a11b81307c133ce1af08: Status 404 returned error can't find the container with id da84fea0f55af047fc4b13fdf503551e01c0aed1b8f9a11b81307c133ce1af08 Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.629625 4756 generic.go:334] "Generic (PLEG): container finished" podID="1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" containerID="c798eccd3660a835606ea7a58605a4dba48057dcdc79bbf74a02c5fc93769505" exitCode=0 Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.629688 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb9nl" event={"ID":"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4","Type":"ContainerDied","Data":"c798eccd3660a835606ea7a58605a4dba48057dcdc79bbf74a02c5fc93769505"} Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.629725 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nb9nl" event={"ID":"1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4","Type":"ContainerDied","Data":"be12c491411871c440f977ad17982ef750568f47e33f93d162521ccbe65c27ba"} Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.629730 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nb9nl" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.629744 4756 scope.go:117] "RemoveContainer" containerID="c798eccd3660a835606ea7a58605a4dba48057dcdc79bbf74a02c5fc93769505" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.634781 4756 generic.go:334] "Generic (PLEG): container finished" podID="a3307bf9-529c-4d16-8a9c-87b46c381ca6" containerID="39856e30705eb367f62342113dc9a825ea190e2daeee5ce3aeb074ebce6c4918" exitCode=0 Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.634822 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdhwb" event={"ID":"a3307bf9-529c-4d16-8a9c-87b46c381ca6","Type":"ContainerDied","Data":"39856e30705eb367f62342113dc9a825ea190e2daeee5ce3aeb074ebce6c4918"} Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.634837 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdhwb" event={"ID":"a3307bf9-529c-4d16-8a9c-87b46c381ca6","Type":"ContainerDied","Data":"43f9f97d2c9172e8b325b20185c63bd3be7ec7ba219b5f6669476e69ff91e97c"} Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.634892 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bdhwb" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.641240 4756 generic.go:334] "Generic (PLEG): container finished" podID="3ff06ae8-376c-4319-9358-200e6f312237" containerID="9e4ed8f3244e70600d1acdbd5e287e7a52c93cd48e3eb0fe869674be3bf4d511" exitCode=0 Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.641326 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" event={"ID":"3ff06ae8-376c-4319-9358-200e6f312237","Type":"ContainerDied","Data":"9e4ed8f3244e70600d1acdbd5e287e7a52c93cd48e3eb0fe869674be3bf4d511"} Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.641342 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" event={"ID":"3ff06ae8-376c-4319-9358-200e6f312237","Type":"ContainerDied","Data":"7ccb6e20c53fc927537b21f5ef128c76d74d1b9ebb8384f27d46808d508504da"} Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.641376 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xbhls" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.652029 4756 generic.go:334] "Generic (PLEG): container finished" podID="609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" containerID="143daebb13efa36f19aea504d071fa2576cad2de36dde80bae5f6d6b1bdbf2e5" exitCode=0 Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.652145 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29s2z" event={"ID":"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e","Type":"ContainerDied","Data":"143daebb13efa36f19aea504d071fa2576cad2de36dde80bae5f6d6b1bdbf2e5"} Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.652222 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-29s2z" event={"ID":"609d4ed3-7c39-4c20-9c1a-1b536ef07a7e","Type":"ContainerDied","Data":"aca1711cffe68954ab372653a32fe8cbbc0fd224d35465b0a115c5c92e7fb2c4"} Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.652315 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-29s2z" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.656476 4756 generic.go:334] "Generic (PLEG): container finished" podID="ef577d14-3e75-4898-a0d8-cf7f03912760" containerID="883fd2e2fdce58c71b1c77af103b047dccd44a41715e848810a81f9e9ea9c8c4" exitCode=0 Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.656618 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8d29p" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.656646 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8d29p" event={"ID":"ef577d14-3e75-4898-a0d8-cf7f03912760","Type":"ContainerDied","Data":"883fd2e2fdce58c71b1c77af103b047dccd44a41715e848810a81f9e9ea9c8c4"} Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.657392 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8d29p" event={"ID":"ef577d14-3e75-4898-a0d8-cf7f03912760","Type":"ContainerDied","Data":"4f7907c151c8e22eecb8a99c9651f992127903b46ad5c79496f87e86889d2244"} Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.674018 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nb9nl"] Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.674176 4756 scope.go:117] "RemoveContainer" containerID="d2d8938d749316a6b06b393f6a7c0fe02ece78c91f47337667734594a98ffb99" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.676732 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nb9nl"] Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.711016 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xbhls"] Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.717753 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xbhls"] Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.725245 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bdhwb"] Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.726788 4756 scope.go:117] "RemoveContainer" containerID="b8a6138be86e1224816bddeda374445ceba0cd39d6ecf44fd1f19af373850175" Nov 24 12:31:05 crc 
kubenswrapper[4756]: I1124 12:31:05.737281 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bdhwb"] Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.741999 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8d29p"] Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.748144 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8d29p"] Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.751104 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-29s2z"] Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.756631 4756 scope.go:117] "RemoveContainer" containerID="c798eccd3660a835606ea7a58605a4dba48057dcdc79bbf74a02c5fc93769505" Nov 24 12:31:05 crc kubenswrapper[4756]: E1124 12:31:05.757187 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c798eccd3660a835606ea7a58605a4dba48057dcdc79bbf74a02c5fc93769505\": container with ID starting with c798eccd3660a835606ea7a58605a4dba48057dcdc79bbf74a02c5fc93769505 not found: ID does not exist" containerID="c798eccd3660a835606ea7a58605a4dba48057dcdc79bbf74a02c5fc93769505" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.757256 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c798eccd3660a835606ea7a58605a4dba48057dcdc79bbf74a02c5fc93769505"} err="failed to get container status \"c798eccd3660a835606ea7a58605a4dba48057dcdc79bbf74a02c5fc93769505\": rpc error: code = NotFound desc = could not find container \"c798eccd3660a835606ea7a58605a4dba48057dcdc79bbf74a02c5fc93769505\": container with ID starting with c798eccd3660a835606ea7a58605a4dba48057dcdc79bbf74a02c5fc93769505 not found: ID does not exist" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.757300 4756 scope.go:117] 
"RemoveContainer" containerID="d2d8938d749316a6b06b393f6a7c0fe02ece78c91f47337667734594a98ffb99" Nov 24 12:31:05 crc kubenswrapper[4756]: E1124 12:31:05.757704 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2d8938d749316a6b06b393f6a7c0fe02ece78c91f47337667734594a98ffb99\": container with ID starting with d2d8938d749316a6b06b393f6a7c0fe02ece78c91f47337667734594a98ffb99 not found: ID does not exist" containerID="d2d8938d749316a6b06b393f6a7c0fe02ece78c91f47337667734594a98ffb99" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.757742 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2d8938d749316a6b06b393f6a7c0fe02ece78c91f47337667734594a98ffb99"} err="failed to get container status \"d2d8938d749316a6b06b393f6a7c0fe02ece78c91f47337667734594a98ffb99\": rpc error: code = NotFound desc = could not find container \"d2d8938d749316a6b06b393f6a7c0fe02ece78c91f47337667734594a98ffb99\": container with ID starting with d2d8938d749316a6b06b393f6a7c0fe02ece78c91f47337667734594a98ffb99 not found: ID does not exist" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.757769 4756 scope.go:117] "RemoveContainer" containerID="b8a6138be86e1224816bddeda374445ceba0cd39d6ecf44fd1f19af373850175" Nov 24 12:31:05 crc kubenswrapper[4756]: E1124 12:31:05.758025 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8a6138be86e1224816bddeda374445ceba0cd39d6ecf44fd1f19af373850175\": container with ID starting with b8a6138be86e1224816bddeda374445ceba0cd39d6ecf44fd1f19af373850175 not found: ID does not exist" containerID="b8a6138be86e1224816bddeda374445ceba0cd39d6ecf44fd1f19af373850175" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.758068 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b8a6138be86e1224816bddeda374445ceba0cd39d6ecf44fd1f19af373850175"} err="failed to get container status \"b8a6138be86e1224816bddeda374445ceba0cd39d6ecf44fd1f19af373850175\": rpc error: code = NotFound desc = could not find container \"b8a6138be86e1224816bddeda374445ceba0cd39d6ecf44fd1f19af373850175\": container with ID starting with b8a6138be86e1224816bddeda374445ceba0cd39d6ecf44fd1f19af373850175 not found: ID does not exist" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.758093 4756 scope.go:117] "RemoveContainer" containerID="39856e30705eb367f62342113dc9a825ea190e2daeee5ce3aeb074ebce6c4918" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.759219 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-29s2z"] Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.772058 4756 scope.go:117] "RemoveContainer" containerID="a7030d26f08a7b3d2bd0c2ba04c353323c10804d42883b756a554100100380a9" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.790004 4756 scope.go:117] "RemoveContainer" containerID="e49fb446c913e437497eac1e954325dc5dd9b0a17dbee77e24ceb3bba4dd2a52" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.837269 4756 scope.go:117] "RemoveContainer" containerID="39856e30705eb367f62342113dc9a825ea190e2daeee5ce3aeb074ebce6c4918" Nov 24 12:31:05 crc kubenswrapper[4756]: E1124 12:31:05.837642 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39856e30705eb367f62342113dc9a825ea190e2daeee5ce3aeb074ebce6c4918\": container with ID starting with 39856e30705eb367f62342113dc9a825ea190e2daeee5ce3aeb074ebce6c4918 not found: ID does not exist" containerID="39856e30705eb367f62342113dc9a825ea190e2daeee5ce3aeb074ebce6c4918" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.837675 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"39856e30705eb367f62342113dc9a825ea190e2daeee5ce3aeb074ebce6c4918"} err="failed to get container status \"39856e30705eb367f62342113dc9a825ea190e2daeee5ce3aeb074ebce6c4918\": rpc error: code = NotFound desc = could not find container \"39856e30705eb367f62342113dc9a825ea190e2daeee5ce3aeb074ebce6c4918\": container with ID starting with 39856e30705eb367f62342113dc9a825ea190e2daeee5ce3aeb074ebce6c4918 not found: ID does not exist" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.837703 4756 scope.go:117] "RemoveContainer" containerID="a7030d26f08a7b3d2bd0c2ba04c353323c10804d42883b756a554100100380a9" Nov 24 12:31:05 crc kubenswrapper[4756]: E1124 12:31:05.838010 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7030d26f08a7b3d2bd0c2ba04c353323c10804d42883b756a554100100380a9\": container with ID starting with a7030d26f08a7b3d2bd0c2ba04c353323c10804d42883b756a554100100380a9 not found: ID does not exist" containerID="a7030d26f08a7b3d2bd0c2ba04c353323c10804d42883b756a554100100380a9" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.838089 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7030d26f08a7b3d2bd0c2ba04c353323c10804d42883b756a554100100380a9"} err="failed to get container status \"a7030d26f08a7b3d2bd0c2ba04c353323c10804d42883b756a554100100380a9\": rpc error: code = NotFound desc = could not find container \"a7030d26f08a7b3d2bd0c2ba04c353323c10804d42883b756a554100100380a9\": container with ID starting with a7030d26f08a7b3d2bd0c2ba04c353323c10804d42883b756a554100100380a9 not found: ID does not exist" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.838164 4756 scope.go:117] "RemoveContainer" containerID="e49fb446c913e437497eac1e954325dc5dd9b0a17dbee77e24ceb3bba4dd2a52" Nov 24 12:31:05 crc kubenswrapper[4756]: E1124 12:31:05.838534 4756 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e49fb446c913e437497eac1e954325dc5dd9b0a17dbee77e24ceb3bba4dd2a52\": container with ID starting with e49fb446c913e437497eac1e954325dc5dd9b0a17dbee77e24ceb3bba4dd2a52 not found: ID does not exist" containerID="e49fb446c913e437497eac1e954325dc5dd9b0a17dbee77e24ceb3bba4dd2a52" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.838562 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e49fb446c913e437497eac1e954325dc5dd9b0a17dbee77e24ceb3bba4dd2a52"} err="failed to get container status \"e49fb446c913e437497eac1e954325dc5dd9b0a17dbee77e24ceb3bba4dd2a52\": rpc error: code = NotFound desc = could not find container \"e49fb446c913e437497eac1e954325dc5dd9b0a17dbee77e24ceb3bba4dd2a52\": container with ID starting with e49fb446c913e437497eac1e954325dc5dd9b0a17dbee77e24ceb3bba4dd2a52 not found: ID does not exist" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.838576 4756 scope.go:117] "RemoveContainer" containerID="9e4ed8f3244e70600d1acdbd5e287e7a52c93cd48e3eb0fe869674be3bf4d511" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.849596 4756 scope.go:117] "RemoveContainer" containerID="9e4ed8f3244e70600d1acdbd5e287e7a52c93cd48e3eb0fe869674be3bf4d511" Nov 24 12:31:05 crc kubenswrapper[4756]: E1124 12:31:05.849997 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4ed8f3244e70600d1acdbd5e287e7a52c93cd48e3eb0fe869674be3bf4d511\": container with ID starting with 9e4ed8f3244e70600d1acdbd5e287e7a52c93cd48e3eb0fe869674be3bf4d511 not found: ID does not exist" containerID="9e4ed8f3244e70600d1acdbd5e287e7a52c93cd48e3eb0fe869674be3bf4d511" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.850028 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4ed8f3244e70600d1acdbd5e287e7a52c93cd48e3eb0fe869674be3bf4d511"} 
err="failed to get container status \"9e4ed8f3244e70600d1acdbd5e287e7a52c93cd48e3eb0fe869674be3bf4d511\": rpc error: code = NotFound desc = could not find container \"9e4ed8f3244e70600d1acdbd5e287e7a52c93cd48e3eb0fe869674be3bf4d511\": container with ID starting with 9e4ed8f3244e70600d1acdbd5e287e7a52c93cd48e3eb0fe869674be3bf4d511 not found: ID does not exist" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.850051 4756 scope.go:117] "RemoveContainer" containerID="143daebb13efa36f19aea504d071fa2576cad2de36dde80bae5f6d6b1bdbf2e5" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.866646 4756 scope.go:117] "RemoveContainer" containerID="9d65b3a17cac10a83fb2ae834eb67ccca64333e30c75d7e908536042ee6d2fea" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.890664 4756 scope.go:117] "RemoveContainer" containerID="d44de153ed0815c40a9ff48379ed0f759cdd6921a5be44ecdde918e8d99875cf" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.905827 4756 scope.go:117] "RemoveContainer" containerID="143daebb13efa36f19aea504d071fa2576cad2de36dde80bae5f6d6b1bdbf2e5" Nov 24 12:31:05 crc kubenswrapper[4756]: E1124 12:31:05.906530 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"143daebb13efa36f19aea504d071fa2576cad2de36dde80bae5f6d6b1bdbf2e5\": container with ID starting with 143daebb13efa36f19aea504d071fa2576cad2de36dde80bae5f6d6b1bdbf2e5 not found: ID does not exist" containerID="143daebb13efa36f19aea504d071fa2576cad2de36dde80bae5f6d6b1bdbf2e5" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.906574 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143daebb13efa36f19aea504d071fa2576cad2de36dde80bae5f6d6b1bdbf2e5"} err="failed to get container status \"143daebb13efa36f19aea504d071fa2576cad2de36dde80bae5f6d6b1bdbf2e5\": rpc error: code = NotFound desc = could not find container 
\"143daebb13efa36f19aea504d071fa2576cad2de36dde80bae5f6d6b1bdbf2e5\": container with ID starting with 143daebb13efa36f19aea504d071fa2576cad2de36dde80bae5f6d6b1bdbf2e5 not found: ID does not exist" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.906606 4756 scope.go:117] "RemoveContainer" containerID="9d65b3a17cac10a83fb2ae834eb67ccca64333e30c75d7e908536042ee6d2fea" Nov 24 12:31:05 crc kubenswrapper[4756]: E1124 12:31:05.907377 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d65b3a17cac10a83fb2ae834eb67ccca64333e30c75d7e908536042ee6d2fea\": container with ID starting with 9d65b3a17cac10a83fb2ae834eb67ccca64333e30c75d7e908536042ee6d2fea not found: ID does not exist" containerID="9d65b3a17cac10a83fb2ae834eb67ccca64333e30c75d7e908536042ee6d2fea" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.907403 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d65b3a17cac10a83fb2ae834eb67ccca64333e30c75d7e908536042ee6d2fea"} err="failed to get container status \"9d65b3a17cac10a83fb2ae834eb67ccca64333e30c75d7e908536042ee6d2fea\": rpc error: code = NotFound desc = could not find container \"9d65b3a17cac10a83fb2ae834eb67ccca64333e30c75d7e908536042ee6d2fea\": container with ID starting with 9d65b3a17cac10a83fb2ae834eb67ccca64333e30c75d7e908536042ee6d2fea not found: ID does not exist" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.907418 4756 scope.go:117] "RemoveContainer" containerID="d44de153ed0815c40a9ff48379ed0f759cdd6921a5be44ecdde918e8d99875cf" Nov 24 12:31:05 crc kubenswrapper[4756]: E1124 12:31:05.907745 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d44de153ed0815c40a9ff48379ed0f759cdd6921a5be44ecdde918e8d99875cf\": container with ID starting with d44de153ed0815c40a9ff48379ed0f759cdd6921a5be44ecdde918e8d99875cf not found: ID does not exist" 
containerID="d44de153ed0815c40a9ff48379ed0f759cdd6921a5be44ecdde918e8d99875cf" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.907769 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d44de153ed0815c40a9ff48379ed0f759cdd6921a5be44ecdde918e8d99875cf"} err="failed to get container status \"d44de153ed0815c40a9ff48379ed0f759cdd6921a5be44ecdde918e8d99875cf\": rpc error: code = NotFound desc = could not find container \"d44de153ed0815c40a9ff48379ed0f759cdd6921a5be44ecdde918e8d99875cf\": container with ID starting with d44de153ed0815c40a9ff48379ed0f759cdd6921a5be44ecdde918e8d99875cf not found: ID does not exist" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.907785 4756 scope.go:117] "RemoveContainer" containerID="883fd2e2fdce58c71b1c77af103b047dccd44a41715e848810a81f9e9ea9c8c4" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.928330 4756 scope.go:117] "RemoveContainer" containerID="fdabda56bd9928fb5e5d67fb20034916d54968e1d2f9c2e010d66e56b5cac890" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.946380 4756 scope.go:117] "RemoveContainer" containerID="3c3527efed8c3bfb71374b867623fd7b4cbc5cf296ebe599c0e0736e87ebb4ef" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.964203 4756 scope.go:117] "RemoveContainer" containerID="883fd2e2fdce58c71b1c77af103b047dccd44a41715e848810a81f9e9ea9c8c4" Nov 24 12:31:05 crc kubenswrapper[4756]: E1124 12:31:05.964724 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"883fd2e2fdce58c71b1c77af103b047dccd44a41715e848810a81f9e9ea9c8c4\": container with ID starting with 883fd2e2fdce58c71b1c77af103b047dccd44a41715e848810a81f9e9ea9c8c4 not found: ID does not exist" containerID="883fd2e2fdce58c71b1c77af103b047dccd44a41715e848810a81f9e9ea9c8c4" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.964754 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"883fd2e2fdce58c71b1c77af103b047dccd44a41715e848810a81f9e9ea9c8c4"} err="failed to get container status \"883fd2e2fdce58c71b1c77af103b047dccd44a41715e848810a81f9e9ea9c8c4\": rpc error: code = NotFound desc = could not find container \"883fd2e2fdce58c71b1c77af103b047dccd44a41715e848810a81f9e9ea9c8c4\": container with ID starting with 883fd2e2fdce58c71b1c77af103b047dccd44a41715e848810a81f9e9ea9c8c4 not found: ID does not exist" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.964780 4756 scope.go:117] "RemoveContainer" containerID="fdabda56bd9928fb5e5d67fb20034916d54968e1d2f9c2e010d66e56b5cac890" Nov 24 12:31:05 crc kubenswrapper[4756]: E1124 12:31:05.965071 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdabda56bd9928fb5e5d67fb20034916d54968e1d2f9c2e010d66e56b5cac890\": container with ID starting with fdabda56bd9928fb5e5d67fb20034916d54968e1d2f9c2e010d66e56b5cac890 not found: ID does not exist" containerID="fdabda56bd9928fb5e5d67fb20034916d54968e1d2f9c2e010d66e56b5cac890" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.965093 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdabda56bd9928fb5e5d67fb20034916d54968e1d2f9c2e010d66e56b5cac890"} err="failed to get container status \"fdabda56bd9928fb5e5d67fb20034916d54968e1d2f9c2e010d66e56b5cac890\": rpc error: code = NotFound desc = could not find container \"fdabda56bd9928fb5e5d67fb20034916d54968e1d2f9c2e010d66e56b5cac890\": container with ID starting with fdabda56bd9928fb5e5d67fb20034916d54968e1d2f9c2e010d66e56b5cac890 not found: ID does not exist" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.965112 4756 scope.go:117] "RemoveContainer" containerID="3c3527efed8c3bfb71374b867623fd7b4cbc5cf296ebe599c0e0736e87ebb4ef" Nov 24 12:31:05 crc kubenswrapper[4756]: E1124 12:31:05.966280 4756 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3c3527efed8c3bfb71374b867623fd7b4cbc5cf296ebe599c0e0736e87ebb4ef\": container with ID starting with 3c3527efed8c3bfb71374b867623fd7b4cbc5cf296ebe599c0e0736e87ebb4ef not found: ID does not exist" containerID="3c3527efed8c3bfb71374b867623fd7b4cbc5cf296ebe599c0e0736e87ebb4ef" Nov 24 12:31:05 crc kubenswrapper[4756]: I1124 12:31:05.966305 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3527efed8c3bfb71374b867623fd7b4cbc5cf296ebe599c0e0736e87ebb4ef"} err="failed to get container status \"3c3527efed8c3bfb71374b867623fd7b4cbc5cf296ebe599c0e0736e87ebb4ef\": rpc error: code = NotFound desc = could not find container \"3c3527efed8c3bfb71374b867623fd7b4cbc5cf296ebe599c0e0736e87ebb4ef\": container with ID starting with 3c3527efed8c3bfb71374b867623fd7b4cbc5cf296ebe599c0e0736e87ebb4ef not found: ID does not exist" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.487339 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" path="/var/lib/kubelet/pods/1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4/volumes" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.489274 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ff06ae8-376c-4319-9358-200e6f312237" path="/var/lib/kubelet/pods/3ff06ae8-376c-4319-9358-200e6f312237/volumes" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.490376 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" path="/var/lib/kubelet/pods/609d4ed3-7c39-4c20-9c1a-1b536ef07a7e/volumes" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.492599 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3307bf9-529c-4d16-8a9c-87b46c381ca6" path="/var/lib/kubelet/pods/a3307bf9-529c-4d16-8a9c-87b46c381ca6/volumes" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 
12:31:06.493925 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef577d14-3e75-4898-a0d8-cf7f03912760" path="/var/lib/kubelet/pods/ef577d14-3e75-4898-a0d8-cf7f03912760/volumes" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.669726 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" event={"ID":"7cc1cad9-8e95-4b2c-bfb2-dd376178315f","Type":"ContainerStarted","Data":"50c553909e24d924f5774bd66229cd0d60ed84c138c337bd9ff397fab672acea"} Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.670176 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" event={"ID":"7cc1cad9-8e95-4b2c-bfb2-dd376178315f","Type":"ContainerStarted","Data":"da84fea0f55af047fc4b13fdf503551e01c0aed1b8f9a11b81307c133ce1af08"} Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.670514 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.675718 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.692005 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dzrdb" podStartSLOduration=2.691980734 podStartE2EDuration="2.691980734s" podCreationTimestamp="2025-11-24 12:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:31:06.688962868 +0000 UTC m=+199.046477000" watchObservedRunningTime="2025-11-24 12:31:06.691980734 +0000 UTC m=+199.049494906" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.833610 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-nmncc"] Nov 24 12:31:06 crc kubenswrapper[4756]: E1124 12:31:06.833941 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" containerName="registry-server" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.833966 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" containerName="registry-server" Nov 24 12:31:06 crc kubenswrapper[4756]: E1124 12:31:06.833987 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef577d14-3e75-4898-a0d8-cf7f03912760" containerName="registry-server" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.833999 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef577d14-3e75-4898-a0d8-cf7f03912760" containerName="registry-server" Nov 24 12:31:06 crc kubenswrapper[4756]: E1124 12:31:06.834017 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" containerName="extract-content" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834029 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" containerName="extract-content" Nov 24 12:31:06 crc kubenswrapper[4756]: E1124 12:31:06.834044 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" containerName="extract-utilities" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834056 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" containerName="extract-utilities" Nov 24 12:31:06 crc kubenswrapper[4756]: E1124 12:31:06.834070 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff06ae8-376c-4319-9358-200e6f312237" containerName="marketplace-operator" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834081 4756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3ff06ae8-376c-4319-9358-200e6f312237" containerName="marketplace-operator" Nov 24 12:31:06 crc kubenswrapper[4756]: E1124 12:31:06.834097 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef577d14-3e75-4898-a0d8-cf7f03912760" containerName="extract-utilities" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834108 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef577d14-3e75-4898-a0d8-cf7f03912760" containerName="extract-utilities" Nov 24 12:31:06 crc kubenswrapper[4756]: E1124 12:31:06.834123 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3307bf9-529c-4d16-8a9c-87b46c381ca6" containerName="registry-server" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834133 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3307bf9-529c-4d16-8a9c-87b46c381ca6" containerName="registry-server" Nov 24 12:31:06 crc kubenswrapper[4756]: E1124 12:31:06.834150 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3307bf9-529c-4d16-8a9c-87b46c381ca6" containerName="extract-content" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834184 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3307bf9-529c-4d16-8a9c-87b46c381ca6" containerName="extract-content" Nov 24 12:31:06 crc kubenswrapper[4756]: E1124 12:31:06.834203 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef577d14-3e75-4898-a0d8-cf7f03912760" containerName="extract-content" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834216 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef577d14-3e75-4898-a0d8-cf7f03912760" containerName="extract-content" Nov 24 12:31:06 crc kubenswrapper[4756]: E1124 12:31:06.834237 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" containerName="extract-utilities" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834248 4756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" containerName="extract-utilities" Nov 24 12:31:06 crc kubenswrapper[4756]: E1124 12:31:06.834263 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" containerName="registry-server" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834275 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" containerName="registry-server" Nov 24 12:31:06 crc kubenswrapper[4756]: E1124 12:31:06.834292 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" containerName="extract-content" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834303 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" containerName="extract-content" Nov 24 12:31:06 crc kubenswrapper[4756]: E1124 12:31:06.834317 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3307bf9-529c-4d16-8a9c-87b46c381ca6" containerName="extract-utilities" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834327 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3307bf9-529c-4d16-8a9c-87b46c381ca6" containerName="extract-utilities" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834473 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef577d14-3e75-4898-a0d8-cf7f03912760" containerName="registry-server" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834494 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ff06ae8-376c-4319-9358-200e6f312237" containerName="marketplace-operator" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834507 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="609d4ed3-7c39-4c20-9c1a-1b536ef07a7e" containerName="registry-server" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834527 4756 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a3307bf9-529c-4d16-8a9c-87b46c381ca6" containerName="registry-server" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.834542 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed239e1-dc53-4ea6-89a4-4cff0bf2b0c4" containerName="registry-server" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.835551 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.839772 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.852344 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmncc"] Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.972854 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78xct\" (UniqueName: \"kubernetes.io/projected/010138d9-b91f-41a7-80a7-468667e43d51-kube-api-access-78xct\") pod \"redhat-marketplace-nmncc\" (UID: \"010138d9-b91f-41a7-80a7-468667e43d51\") " pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.972938 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/010138d9-b91f-41a7-80a7-468667e43d51-catalog-content\") pod \"redhat-marketplace-nmncc\" (UID: \"010138d9-b91f-41a7-80a7-468667e43d51\") " pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:06 crc kubenswrapper[4756]: I1124 12:31:06.973011 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/010138d9-b91f-41a7-80a7-468667e43d51-utilities\") pod \"redhat-marketplace-nmncc\" (UID: 
\"010138d9-b91f-41a7-80a7-468667e43d51\") " pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.033274 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l4qxn"] Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.034916 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.037825 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.050313 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l4qxn"] Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.074740 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78xct\" (UniqueName: \"kubernetes.io/projected/010138d9-b91f-41a7-80a7-468667e43d51-kube-api-access-78xct\") pod \"redhat-marketplace-nmncc\" (UID: \"010138d9-b91f-41a7-80a7-468667e43d51\") " pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.074814 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/010138d9-b91f-41a7-80a7-468667e43d51-catalog-content\") pod \"redhat-marketplace-nmncc\" (UID: \"010138d9-b91f-41a7-80a7-468667e43d51\") " pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.074858 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/010138d9-b91f-41a7-80a7-468667e43d51-utilities\") pod \"redhat-marketplace-nmncc\" (UID: \"010138d9-b91f-41a7-80a7-468667e43d51\") " 
pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.075524 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/010138d9-b91f-41a7-80a7-468667e43d51-catalog-content\") pod \"redhat-marketplace-nmncc\" (UID: \"010138d9-b91f-41a7-80a7-468667e43d51\") " pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.075566 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/010138d9-b91f-41a7-80a7-468667e43d51-utilities\") pod \"redhat-marketplace-nmncc\" (UID: \"010138d9-b91f-41a7-80a7-468667e43d51\") " pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.100616 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78xct\" (UniqueName: \"kubernetes.io/projected/010138d9-b91f-41a7-80a7-468667e43d51-kube-api-access-78xct\") pod \"redhat-marketplace-nmncc\" (UID: \"010138d9-b91f-41a7-80a7-468667e43d51\") " pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.159309 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.176402 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae50a20-6a68-4c81-a165-6eaeca6bcf3e-catalog-content\") pod \"certified-operators-l4qxn\" (UID: \"cae50a20-6a68-4c81-a165-6eaeca6bcf3e\") " pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.176501 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae50a20-6a68-4c81-a165-6eaeca6bcf3e-utilities\") pod \"certified-operators-l4qxn\" (UID: \"cae50a20-6a68-4c81-a165-6eaeca6bcf3e\") " pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.176603 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zhn6\" (UniqueName: \"kubernetes.io/projected/cae50a20-6a68-4c81-a165-6eaeca6bcf3e-kube-api-access-7zhn6\") pod \"certified-operators-l4qxn\" (UID: \"cae50a20-6a68-4c81-a165-6eaeca6bcf3e\") " pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.278371 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zhn6\" (UniqueName: \"kubernetes.io/projected/cae50a20-6a68-4c81-a165-6eaeca6bcf3e-kube-api-access-7zhn6\") pod \"certified-operators-l4qxn\" (UID: \"cae50a20-6a68-4c81-a165-6eaeca6bcf3e\") " pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.280587 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae50a20-6a68-4c81-a165-6eaeca6bcf3e-catalog-content\") pod 
\"certified-operators-l4qxn\" (UID: \"cae50a20-6a68-4c81-a165-6eaeca6bcf3e\") " pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.281070 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae50a20-6a68-4c81-a165-6eaeca6bcf3e-utilities\") pod \"certified-operators-l4qxn\" (UID: \"cae50a20-6a68-4c81-a165-6eaeca6bcf3e\") " pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.282387 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae50a20-6a68-4c81-a165-6eaeca6bcf3e-utilities\") pod \"certified-operators-l4qxn\" (UID: \"cae50a20-6a68-4c81-a165-6eaeca6bcf3e\") " pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.282774 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae50a20-6a68-4c81-a165-6eaeca6bcf3e-catalog-content\") pod \"certified-operators-l4qxn\" (UID: \"cae50a20-6a68-4c81-a165-6eaeca6bcf3e\") " pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.300830 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zhn6\" (UniqueName: \"kubernetes.io/projected/cae50a20-6a68-4c81-a165-6eaeca6bcf3e-kube-api-access-7zhn6\") pod \"certified-operators-l4qxn\" (UID: \"cae50a20-6a68-4c81-a165-6eaeca6bcf3e\") " pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.371995 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.587358 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmncc"] Nov 24 12:31:07 crc kubenswrapper[4756]: W1124 12:31:07.589862 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod010138d9_b91f_41a7_80a7_468667e43d51.slice/crio-a6fb2a6c10a29420dfd082cbbbbc57834e2d1668c041d350b414c5d72e8673d9 WatchSource:0}: Error finding container a6fb2a6c10a29420dfd082cbbbbc57834e2d1668c041d350b414c5d72e8673d9: Status 404 returned error can't find the container with id a6fb2a6c10a29420dfd082cbbbbc57834e2d1668c041d350b414c5d72e8673d9 Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.683900 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmncc" event={"ID":"010138d9-b91f-41a7-80a7-468667e43d51","Type":"ContainerStarted","Data":"a6fb2a6c10a29420dfd082cbbbbc57834e2d1668c041d350b414c5d72e8673d9"} Nov 24 12:31:07 crc kubenswrapper[4756]: I1124 12:31:07.765658 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l4qxn"] Nov 24 12:31:07 crc kubenswrapper[4756]: W1124 12:31:07.798864 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcae50a20_6a68_4c81_a165_6eaeca6bcf3e.slice/crio-07d4ef324c237d014231afd424275ceb55ec756d35f53566b9b346fbede75ca5 WatchSource:0}: Error finding container 07d4ef324c237d014231afd424275ceb55ec756d35f53566b9b346fbede75ca5: Status 404 returned error can't find the container with id 07d4ef324c237d014231afd424275ceb55ec756d35f53566b9b346fbede75ca5 Nov 24 12:31:08 crc kubenswrapper[4756]: I1124 12:31:08.692699 4756 generic.go:334] "Generic (PLEG): container finished" podID="010138d9-b91f-41a7-80a7-468667e43d51" 
containerID="010a4169661fc108ba434b1180d9b18786bedc11e4c99ac10fa7c2f978bc0a4e" exitCode=0 Nov 24 12:31:08 crc kubenswrapper[4756]: I1124 12:31:08.692823 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmncc" event={"ID":"010138d9-b91f-41a7-80a7-468667e43d51","Type":"ContainerDied","Data":"010a4169661fc108ba434b1180d9b18786bedc11e4c99ac10fa7c2f978bc0a4e"} Nov 24 12:31:08 crc kubenswrapper[4756]: I1124 12:31:08.699723 4756 generic.go:334] "Generic (PLEG): container finished" podID="cae50a20-6a68-4c81-a165-6eaeca6bcf3e" containerID="f85ec96d03d9a4c28e04e7c3ea39f327928c43ccd73f8033b8589e5154121c5a" exitCode=0 Nov 24 12:31:08 crc kubenswrapper[4756]: I1124 12:31:08.700828 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4qxn" event={"ID":"cae50a20-6a68-4c81-a165-6eaeca6bcf3e","Type":"ContainerDied","Data":"f85ec96d03d9a4c28e04e7c3ea39f327928c43ccd73f8033b8589e5154121c5a"} Nov 24 12:31:08 crc kubenswrapper[4756]: I1124 12:31:08.700862 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4qxn" event={"ID":"cae50a20-6a68-4c81-a165-6eaeca6bcf3e","Type":"ContainerStarted","Data":"07d4ef324c237d014231afd424275ceb55ec756d35f53566b9b346fbede75ca5"} Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.234993 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tbh8f"] Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.236753 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.241055 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.246073 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tbh8f"] Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.410432 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e23dac-91d1-47d0-9ae7-96ee82cd8749-catalog-content\") pod \"redhat-operators-tbh8f\" (UID: \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\") " pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.410495 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e23dac-91d1-47d0-9ae7-96ee82cd8749-utilities\") pod \"redhat-operators-tbh8f\" (UID: \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\") " pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.410535 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zssqs\" (UniqueName: \"kubernetes.io/projected/67e23dac-91d1-47d0-9ae7-96ee82cd8749-kube-api-access-zssqs\") pod \"redhat-operators-tbh8f\" (UID: \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\") " pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.431279 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5kgmg"] Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.432714 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.434650 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.442109 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kgmg"] Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.511736 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e23dac-91d1-47d0-9ae7-96ee82cd8749-utilities\") pod \"redhat-operators-tbh8f\" (UID: \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\") " pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.511845 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zssqs\" (UniqueName: \"kubernetes.io/projected/67e23dac-91d1-47d0-9ae7-96ee82cd8749-kube-api-access-zssqs\") pod \"redhat-operators-tbh8f\" (UID: \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\") " pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.511890 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e23dac-91d1-47d0-9ae7-96ee82cd8749-catalog-content\") pod \"redhat-operators-tbh8f\" (UID: \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\") " pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.512507 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e23dac-91d1-47d0-9ae7-96ee82cd8749-utilities\") pod \"redhat-operators-tbh8f\" (UID: \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\") " pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 
12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.513663 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e23dac-91d1-47d0-9ae7-96ee82cd8749-catalog-content\") pod \"redhat-operators-tbh8f\" (UID: \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\") " pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.531219 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zssqs\" (UniqueName: \"kubernetes.io/projected/67e23dac-91d1-47d0-9ae7-96ee82cd8749-kube-api-access-zssqs\") pod \"redhat-operators-tbh8f\" (UID: \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\") " pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.559718 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.619655 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2zdp\" (UniqueName: \"kubernetes.io/projected/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-kube-api-access-s2zdp\") pod \"community-operators-5kgmg\" (UID: \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\") " pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.619716 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-utilities\") pod \"community-operators-5kgmg\" (UID: \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\") " pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.619862 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-catalog-content\") pod \"community-operators-5kgmg\" (UID: \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\") " pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.705954 4756 generic.go:334] "Generic (PLEG): container finished" podID="010138d9-b91f-41a7-80a7-468667e43d51" containerID="d1b0e5bd40fb71d5222c7eedf544878c764da924d534d12ef59497abe93ec76a" exitCode=0 Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.706024 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmncc" event={"ID":"010138d9-b91f-41a7-80a7-468667e43d51","Type":"ContainerDied","Data":"d1b0e5bd40fb71d5222c7eedf544878c764da924d534d12ef59497abe93ec76a"} Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.708726 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4qxn" event={"ID":"cae50a20-6a68-4c81-a165-6eaeca6bcf3e","Type":"ContainerStarted","Data":"89afcc95b27c71d33426e54218020dee006ee867d70b704b8904914ea356ea2c"} Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.721938 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-catalog-content\") pod \"community-operators-5kgmg\" (UID: \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\") " pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.722009 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2zdp\" (UniqueName: \"kubernetes.io/projected/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-kube-api-access-s2zdp\") pod \"community-operators-5kgmg\" (UID: \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\") " pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.722033 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-utilities\") pod \"community-operators-5kgmg\" (UID: \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\") " pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.723144 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-catalog-content\") pod \"community-operators-5kgmg\" (UID: \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\") " pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.723207 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-utilities\") pod \"community-operators-5kgmg\" (UID: \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\") " pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.743813 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2zdp\" (UniqueName: \"kubernetes.io/projected/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-kube-api-access-s2zdp\") pod \"community-operators-5kgmg\" (UID: \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\") " pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.767013 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tbh8f"] Nov 24 12:31:09 crc kubenswrapper[4756]: I1124 12:31:09.781061 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:09 crc kubenswrapper[4756]: W1124 12:31:09.834967 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67e23dac_91d1_47d0_9ae7_96ee82cd8749.slice/crio-3c2b3e925d1328bf6b3d41e600aca3fe6b31649e1dca2d2cd13fa2957691bad8 WatchSource:0}: Error finding container 3c2b3e925d1328bf6b3d41e600aca3fe6b31649e1dca2d2cd13fa2957691bad8: Status 404 returned error can't find the container with id 3c2b3e925d1328bf6b3d41e600aca3fe6b31649e1dca2d2cd13fa2957691bad8 Nov 24 12:31:10 crc kubenswrapper[4756]: I1124 12:31:10.248370 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kgmg"] Nov 24 12:31:10 crc kubenswrapper[4756]: W1124 12:31:10.257816 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda32ab267_a3aa_4fa5_80e5_ebbe78465af3.slice/crio-4c0ddf494150de85fd2e881ed06f7ae68957b67ef5961af825d7d7ea19c94ceb WatchSource:0}: Error finding container 4c0ddf494150de85fd2e881ed06f7ae68957b67ef5961af825d7d7ea19c94ceb: Status 404 returned error can't find the container with id 4c0ddf494150de85fd2e881ed06f7ae68957b67ef5961af825d7d7ea19c94ceb Nov 24 12:31:10 crc kubenswrapper[4756]: I1124 12:31:10.736974 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmncc" event={"ID":"010138d9-b91f-41a7-80a7-468667e43d51","Type":"ContainerStarted","Data":"3d85cc96bc9fe3ad4891e3961e29507dc7a813bd548a48404de728cdcb9a2f41"} Nov 24 12:31:10 crc kubenswrapper[4756]: I1124 12:31:10.740891 4756 generic.go:334] "Generic (PLEG): container finished" podID="67e23dac-91d1-47d0-9ae7-96ee82cd8749" containerID="a9d4f9c6e9a9463e7710c8e73df8d3b7eae7322a84c6d3b71cc908011f566f5e" exitCode=0 Nov 24 12:31:10 crc kubenswrapper[4756]: I1124 12:31:10.740996 4756 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbh8f" event={"ID":"67e23dac-91d1-47d0-9ae7-96ee82cd8749","Type":"ContainerDied","Data":"a9d4f9c6e9a9463e7710c8e73df8d3b7eae7322a84c6d3b71cc908011f566f5e"} Nov 24 12:31:10 crc kubenswrapper[4756]: I1124 12:31:10.741056 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbh8f" event={"ID":"67e23dac-91d1-47d0-9ae7-96ee82cd8749","Type":"ContainerStarted","Data":"3c2b3e925d1328bf6b3d41e600aca3fe6b31649e1dca2d2cd13fa2957691bad8"} Nov 24 12:31:10 crc kubenswrapper[4756]: I1124 12:31:10.748390 4756 generic.go:334] "Generic (PLEG): container finished" podID="cae50a20-6a68-4c81-a165-6eaeca6bcf3e" containerID="89afcc95b27c71d33426e54218020dee006ee867d70b704b8904914ea356ea2c" exitCode=0 Nov 24 12:31:10 crc kubenswrapper[4756]: I1124 12:31:10.748468 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4qxn" event={"ID":"cae50a20-6a68-4c81-a165-6eaeca6bcf3e","Type":"ContainerDied","Data":"89afcc95b27c71d33426e54218020dee006ee867d70b704b8904914ea356ea2c"} Nov 24 12:31:10 crc kubenswrapper[4756]: I1124 12:31:10.756954 4756 generic.go:334] "Generic (PLEG): container finished" podID="a32ab267-a3aa-4fa5-80e5-ebbe78465af3" containerID="a0c494e446b9dd3ad2e39d6907f5dcbfdebdff5aac25181b29d46b5ee66d29bb" exitCode=0 Nov 24 12:31:10 crc kubenswrapper[4756]: I1124 12:31:10.756999 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kgmg" event={"ID":"a32ab267-a3aa-4fa5-80e5-ebbe78465af3","Type":"ContainerDied","Data":"a0c494e446b9dd3ad2e39d6907f5dcbfdebdff5aac25181b29d46b5ee66d29bb"} Nov 24 12:31:10 crc kubenswrapper[4756]: I1124 12:31:10.757025 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kgmg" 
event={"ID":"a32ab267-a3aa-4fa5-80e5-ebbe78465af3","Type":"ContainerStarted","Data":"4c0ddf494150de85fd2e881ed06f7ae68957b67ef5961af825d7d7ea19c94ceb"} Nov 24 12:31:10 crc kubenswrapper[4756]: I1124 12:31:10.781412 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nmncc" podStartSLOduration=3.359630964 podStartE2EDuration="4.781390993s" podCreationTimestamp="2025-11-24 12:31:06 +0000 UTC" firstStartedPulling="2025-11-24 12:31:08.694991437 +0000 UTC m=+201.052505579" lastFinishedPulling="2025-11-24 12:31:10.116751466 +0000 UTC m=+202.474265608" observedRunningTime="2025-11-24 12:31:10.762359329 +0000 UTC m=+203.119873471" watchObservedRunningTime="2025-11-24 12:31:10.781390993 +0000 UTC m=+203.138905145" Nov 24 12:31:11 crc kubenswrapper[4756]: I1124 12:31:11.762940 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbh8f" event={"ID":"67e23dac-91d1-47d0-9ae7-96ee82cd8749","Type":"ContainerStarted","Data":"f616cb7c1d6f4cab43ebf0fdb6aefe95042c38b27fb3b3d20f9df14800242ab5"} Nov 24 12:31:11 crc kubenswrapper[4756]: I1124 12:31:11.764889 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4qxn" event={"ID":"cae50a20-6a68-4c81-a165-6eaeca6bcf3e","Type":"ContainerStarted","Data":"c0069eff2159b1b045883fbb6bb0c18a4dc92363d6235d965b7b596da6c5b7d9"} Nov 24 12:31:11 crc kubenswrapper[4756]: I1124 12:31:11.800446 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l4qxn" podStartSLOduration=2.35807622 podStartE2EDuration="4.800420151s" podCreationTimestamp="2025-11-24 12:31:07 +0000 UTC" firstStartedPulling="2025-11-24 12:31:08.701582045 +0000 UTC m=+201.059096187" lastFinishedPulling="2025-11-24 12:31:11.143925986 +0000 UTC m=+203.501440118" observedRunningTime="2025-11-24 12:31:11.800149913 +0000 UTC m=+204.157664085" 
watchObservedRunningTime="2025-11-24 12:31:11.800420151 +0000 UTC m=+204.157934293" Nov 24 12:31:12 crc kubenswrapper[4756]: I1124 12:31:12.772097 4756 generic.go:334] "Generic (PLEG): container finished" podID="67e23dac-91d1-47d0-9ae7-96ee82cd8749" containerID="f616cb7c1d6f4cab43ebf0fdb6aefe95042c38b27fb3b3d20f9df14800242ab5" exitCode=0 Nov 24 12:31:12 crc kubenswrapper[4756]: I1124 12:31:12.772202 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbh8f" event={"ID":"67e23dac-91d1-47d0-9ae7-96ee82cd8749","Type":"ContainerDied","Data":"f616cb7c1d6f4cab43ebf0fdb6aefe95042c38b27fb3b3d20f9df14800242ab5"} Nov 24 12:31:12 crc kubenswrapper[4756]: I1124 12:31:12.774745 4756 generic.go:334] "Generic (PLEG): container finished" podID="a32ab267-a3aa-4fa5-80e5-ebbe78465af3" containerID="ddd6473ccf98bb334ca3c8ff90d1690a9c0f0964a36870cd2a5109557ec619a6" exitCode=0 Nov 24 12:31:12 crc kubenswrapper[4756]: I1124 12:31:12.774819 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kgmg" event={"ID":"a32ab267-a3aa-4fa5-80e5-ebbe78465af3","Type":"ContainerDied","Data":"ddd6473ccf98bb334ca3c8ff90d1690a9c0f0964a36870cd2a5109557ec619a6"} Nov 24 12:31:14 crc kubenswrapper[4756]: I1124 12:31:14.791809 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kgmg" event={"ID":"a32ab267-a3aa-4fa5-80e5-ebbe78465af3","Type":"ContainerStarted","Data":"a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b"} Nov 24 12:31:14 crc kubenswrapper[4756]: I1124 12:31:14.795299 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbh8f" event={"ID":"67e23dac-91d1-47d0-9ae7-96ee82cd8749","Type":"ContainerStarted","Data":"16f8604364752474f86d8e0d224a02c4e3b607dd4fe6f95011b37d145484a306"} Nov 24 12:31:14 crc kubenswrapper[4756]: I1124 12:31:14.815138 4756 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/community-operators-5kgmg" podStartSLOduration=3.404110406 podStartE2EDuration="5.815112952s" podCreationTimestamp="2025-11-24 12:31:09 +0000 UTC" firstStartedPulling="2025-11-24 12:31:10.762122222 +0000 UTC m=+203.119636364" lastFinishedPulling="2025-11-24 12:31:13.173124768 +0000 UTC m=+205.530638910" observedRunningTime="2025-11-24 12:31:14.81294439 +0000 UTC m=+207.170458532" watchObservedRunningTime="2025-11-24 12:31:14.815112952 +0000 UTC m=+207.172627094" Nov 24 12:31:14 crc kubenswrapper[4756]: I1124 12:31:14.831282 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tbh8f" podStartSLOduration=3.346220711 podStartE2EDuration="5.831256693s" podCreationTimestamp="2025-11-24 12:31:09 +0000 UTC" firstStartedPulling="2025-11-24 12:31:10.744263392 +0000 UTC m=+203.101777544" lastFinishedPulling="2025-11-24 12:31:13.229299384 +0000 UTC m=+205.586813526" observedRunningTime="2025-11-24 12:31:14.827469965 +0000 UTC m=+207.184984117" watchObservedRunningTime="2025-11-24 12:31:14.831256693 +0000 UTC m=+207.188770835" Nov 24 12:31:17 crc kubenswrapper[4756]: I1124 12:31:17.160489 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:17 crc kubenswrapper[4756]: I1124 12:31:17.162434 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:17 crc kubenswrapper[4756]: I1124 12:31:17.217435 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:17 crc kubenswrapper[4756]: I1124 12:31:17.373220 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:17 crc kubenswrapper[4756]: I1124 12:31:17.373269 4756 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:17 crc kubenswrapper[4756]: I1124 12:31:17.408925 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:17 crc kubenswrapper[4756]: I1124 12:31:17.857055 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l4qxn" Nov 24 12:31:17 crc kubenswrapper[4756]: I1124 12:31:17.857297 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nmncc" Nov 24 12:31:19 crc kubenswrapper[4756]: I1124 12:31:19.560460 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:19 crc kubenswrapper[4756]: I1124 12:31:19.560786 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:19 crc kubenswrapper[4756]: I1124 12:31:19.781700 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:19 crc kubenswrapper[4756]: I1124 12:31:19.781762 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:19 crc kubenswrapper[4756]: I1124 12:31:19.831605 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:19 crc kubenswrapper[4756]: I1124 12:31:19.874203 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:31:20 crc kubenswrapper[4756]: I1124 12:31:20.607501 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tbh8f" 
podUID="67e23dac-91d1-47d0-9ae7-96ee82cd8749" containerName="registry-server" probeResult="failure" output=< Nov 24 12:31:20 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 12:31:20 crc kubenswrapper[4756]: > Nov 24 12:31:29 crc kubenswrapper[4756]: I1124 12:31:29.610445 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:29 crc kubenswrapper[4756]: I1124 12:31:29.657709 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 12:31:33 crc kubenswrapper[4756]: I1124 12:31:33.479641 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:31:33 crc kubenswrapper[4756]: I1124 12:31:33.480776 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:31:33 crc kubenswrapper[4756]: I1124 12:31:33.480903 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:31:33 crc kubenswrapper[4756]: I1124 12:31:33.481750 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18d8e56c608685e778eab0b76fd45d35fe83d1e6bcbc388b06ca0b77ba191874"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Nov 24 12:31:33 crc kubenswrapper[4756]: I1124 12:31:33.481920 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://18d8e56c608685e778eab0b76fd45d35fe83d1e6bcbc388b06ca0b77ba191874" gracePeriod=600 Nov 24 12:31:33 crc kubenswrapper[4756]: I1124 12:31:33.912824 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="18d8e56c608685e778eab0b76fd45d35fe83d1e6bcbc388b06ca0b77ba191874" exitCode=0 Nov 24 12:31:33 crc kubenswrapper[4756]: I1124 12:31:33.912919 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"18d8e56c608685e778eab0b76fd45d35fe83d1e6bcbc388b06ca0b77ba191874"} Nov 24 12:31:33 crc kubenswrapper[4756]: I1124 12:31:33.913122 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"d419abe4df6f670f855d594a57ab33aaab6cd64ce42c054a467f81e1256746e2"} Nov 24 12:33:33 crc kubenswrapper[4756]: I1124 12:33:33.479770 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:33:33 crc kubenswrapper[4756]: I1124 12:33:33.481407 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:34:03 crc kubenswrapper[4756]: I1124 12:34:03.479398 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:34:03 crc kubenswrapper[4756]: I1124 12:34:03.480050 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:34:33 crc kubenswrapper[4756]: I1124 12:34:33.479606 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:34:33 crc kubenswrapper[4756]: I1124 12:34:33.480454 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:34:33 crc kubenswrapper[4756]: I1124 12:34:33.480513 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:34:33 crc kubenswrapper[4756]: I1124 12:34:33.481239 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d419abe4df6f670f855d594a57ab33aaab6cd64ce42c054a467f81e1256746e2"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:34:33 crc kubenswrapper[4756]: I1124 12:34:33.481358 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://d419abe4df6f670f855d594a57ab33aaab6cd64ce42c054a467f81e1256746e2" gracePeriod=600 Nov 24 12:34:34 crc kubenswrapper[4756]: I1124 12:34:34.029976 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="d419abe4df6f670f855d594a57ab33aaab6cd64ce42c054a467f81e1256746e2" exitCode=0 Nov 24 12:34:34 crc kubenswrapper[4756]: I1124 12:34:34.030029 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"d419abe4df6f670f855d594a57ab33aaab6cd64ce42c054a467f81e1256746e2"} Nov 24 12:34:34 crc kubenswrapper[4756]: I1124 12:34:34.031035 4756 scope.go:117] "RemoveContainer" containerID="18d8e56c608685e778eab0b76fd45d35fe83d1e6bcbc388b06ca0b77ba191874" Nov 24 12:34:34 crc kubenswrapper[4756]: I1124 12:34:34.031768 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"08cbdaf4c5a00dfec5d8d1553322ef80891c64bf60f6dc1ea376e947fc205e7b"} Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.344627 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xhmpv"] Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 
12:34:57.345952 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.364652 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xhmpv"] Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.480554 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7a412edf-225e-4686-b53f-4856cc1af502-registry-certificates\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.480620 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a412edf-225e-4686-b53f-4856cc1af502-bound-sa-token\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.480644 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7ssf\" (UniqueName: \"kubernetes.io/projected/7a412edf-225e-4686-b53f-4856cc1af502-kube-api-access-z7ssf\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.480850 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7a412edf-225e-4686-b53f-4856cc1af502-registry-tls\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: 
\"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.480996 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7a412edf-225e-4686-b53f-4856cc1af502-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.481086 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7a412edf-225e-4686-b53f-4856cc1af502-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.481138 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.481233 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a412edf-225e-4686-b53f-4856cc1af502-trusted-ca\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.504483 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.582166 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7a412edf-225e-4686-b53f-4856cc1af502-registry-tls\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.582235 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7a412edf-225e-4686-b53f-4856cc1af502-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.582267 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7a412edf-225e-4686-b53f-4856cc1af502-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.582300 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a412edf-225e-4686-b53f-4856cc1af502-trusted-ca\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc 
kubenswrapper[4756]: I1124 12:34:57.582329 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7a412edf-225e-4686-b53f-4856cc1af502-registry-certificates\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.582361 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a412edf-225e-4686-b53f-4856cc1af502-bound-sa-token\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.582391 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7ssf\" (UniqueName: \"kubernetes.io/projected/7a412edf-225e-4686-b53f-4856cc1af502-kube-api-access-z7ssf\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.583764 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7a412edf-225e-4686-b53f-4856cc1af502-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.583833 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a412edf-225e-4686-b53f-4856cc1af502-trusted-ca\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.583861 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7a412edf-225e-4686-b53f-4856cc1af502-registry-certificates\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.588637 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7a412edf-225e-4686-b53f-4856cc1af502-registry-tls\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.588923 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7a412edf-225e-4686-b53f-4856cc1af502-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.600750 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7a412edf-225e-4686-b53f-4856cc1af502-bound-sa-token\") pod \"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.608035 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7ssf\" (UniqueName: \"kubernetes.io/projected/7a412edf-225e-4686-b53f-4856cc1af502-kube-api-access-z7ssf\") pod 
\"image-registry-66df7c8f76-xhmpv\" (UID: \"7a412edf-225e-4686-b53f-4856cc1af502\") " pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:57 crc kubenswrapper[4756]: I1124 12:34:57.661865 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:58 crc kubenswrapper[4756]: I1124 12:34:58.078671 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xhmpv"] Nov 24 12:34:58 crc kubenswrapper[4756]: I1124 12:34:58.204999 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" event={"ID":"7a412edf-225e-4686-b53f-4856cc1af502","Type":"ContainerStarted","Data":"59136fe0033525b37bdc22824bc4f153bd88fc24fbb1eeebf5ccbd39b5a49a53"} Nov 24 12:34:59 crc kubenswrapper[4756]: I1124 12:34:59.211441 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" event={"ID":"7a412edf-225e-4686-b53f-4856cc1af502","Type":"ContainerStarted","Data":"d0127a12909b1f47991e9c2a33dfd8fa6542cce566a62d516d57b1126b05e842"} Nov 24 12:34:59 crc kubenswrapper[4756]: I1124 12:34:59.211653 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:34:59 crc kubenswrapper[4756]: I1124 12:34:59.235283 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" podStartSLOduration=2.235263293 podStartE2EDuration="2.235263293s" podCreationTimestamp="2025-11-24 12:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:34:59.232706029 +0000 UTC m=+431.590220181" watchObservedRunningTime="2025-11-24 12:34:59.235263293 +0000 UTC m=+431.592777435" Nov 24 12:35:17 crc 
kubenswrapper[4756]: I1124 12:35:17.668892 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xhmpv" Nov 24 12:35:17 crc kubenswrapper[4756]: I1124 12:35:17.734986 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p5f6n"] Nov 24 12:35:42 crc kubenswrapper[4756]: I1124 12:35:42.774436 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" podUID="c4943ec6-a5a3-4e97-9073-cf59209bfbf3" containerName="registry" containerID="cri-o://db5d9d94bddb274e0f305d57766b49fb65f3b74577cc312af6e035c505fdc1d0" gracePeriod=30 Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.161624 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.313397 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt4d6\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-kube-api-access-dt4d6\") pod \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.313531 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-ca-trust-extracted\") pod \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.313608 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-registry-tls\") pod \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\" (UID: 
\"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.313829 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.313916 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-trusted-ca\") pod \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.313959 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-registry-certificates\") pod \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.313998 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-installation-pull-secrets\") pod \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.314027 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-bound-sa-token\") pod \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\" (UID: \"c4943ec6-a5a3-4e97-9073-cf59209bfbf3\") " Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.315145 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c4943ec6-a5a3-4e97-9073-cf59209bfbf3" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.315364 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c4943ec6-a5a3-4e97-9073-cf59209bfbf3" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.323859 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-kube-api-access-dt4d6" (OuterVolumeSpecName: "kube-api-access-dt4d6") pod "c4943ec6-a5a3-4e97-9073-cf59209bfbf3" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3"). InnerVolumeSpecName "kube-api-access-dt4d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.325033 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c4943ec6-a5a3-4e97-9073-cf59209bfbf3" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.325047 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c4943ec6-a5a3-4e97-9073-cf59209bfbf3" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.329755 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c4943ec6-a5a3-4e97-9073-cf59209bfbf3" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.330484 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c4943ec6-a5a3-4e97-9073-cf59209bfbf3" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.333370 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c4943ec6-a5a3-4e97-9073-cf59209bfbf3" (UID: "c4943ec6-a5a3-4e97-9073-cf59209bfbf3"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.416028 4756 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.416368 4756 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.416486 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.416547 4756 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.416601 4756 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.416650 4756 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.416698 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt4d6\" (UniqueName: \"kubernetes.io/projected/c4943ec6-a5a3-4e97-9073-cf59209bfbf3-kube-api-access-dt4d6\") on node \"crc\" DevicePath \"\"" Nov 24 12:35:43 crc 
kubenswrapper[4756]: I1124 12:35:43.484875 4756 generic.go:334] "Generic (PLEG): container finished" podID="c4943ec6-a5a3-4e97-9073-cf59209bfbf3" containerID="db5d9d94bddb274e0f305d57766b49fb65f3b74577cc312af6e035c505fdc1d0" exitCode=0 Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.484917 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" event={"ID":"c4943ec6-a5a3-4e97-9073-cf59209bfbf3","Type":"ContainerDied","Data":"db5d9d94bddb274e0f305d57766b49fb65f3b74577cc312af6e035c505fdc1d0"} Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.484950 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" event={"ID":"c4943ec6-a5a3-4e97-9073-cf59209bfbf3","Type":"ContainerDied","Data":"97bfa5ca47de23a90df84d0596eebf60d94356a5b183d32888a74972af3b21a1"} Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.484953 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p5f6n" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.484973 4756 scope.go:117] "RemoveContainer" containerID="db5d9d94bddb274e0f305d57766b49fb65f3b74577cc312af6e035c505fdc1d0" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.505354 4756 scope.go:117] "RemoveContainer" containerID="db5d9d94bddb274e0f305d57766b49fb65f3b74577cc312af6e035c505fdc1d0" Nov 24 12:35:43 crc kubenswrapper[4756]: E1124 12:35:43.505751 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5d9d94bddb274e0f305d57766b49fb65f3b74577cc312af6e035c505fdc1d0\": container with ID starting with db5d9d94bddb274e0f305d57766b49fb65f3b74577cc312af6e035c505fdc1d0 not found: ID does not exist" containerID="db5d9d94bddb274e0f305d57766b49fb65f3b74577cc312af6e035c505fdc1d0" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.505805 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5d9d94bddb274e0f305d57766b49fb65f3b74577cc312af6e035c505fdc1d0"} err="failed to get container status \"db5d9d94bddb274e0f305d57766b49fb65f3b74577cc312af6e035c505fdc1d0\": rpc error: code = NotFound desc = could not find container \"db5d9d94bddb274e0f305d57766b49fb65f3b74577cc312af6e035c505fdc1d0\": container with ID starting with db5d9d94bddb274e0f305d57766b49fb65f3b74577cc312af6e035c505fdc1d0 not found: ID does not exist" Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.536246 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p5f6n"] Nov 24 12:35:43 crc kubenswrapper[4756]: I1124 12:35:43.543205 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p5f6n"] Nov 24 12:35:44 crc kubenswrapper[4756]: I1124 12:35:44.488610 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c4943ec6-a5a3-4e97-9073-cf59209bfbf3" path="/var/lib/kubelet/pods/c4943ec6-a5a3-4e97-9073-cf59209bfbf3/volumes" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.856716 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-4dcdb"] Nov 24 12:36:31 crc kubenswrapper[4756]: E1124 12:36:31.857784 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4943ec6-a5a3-4e97-9073-cf59209bfbf3" containerName="registry" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.857813 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4943ec6-a5a3-4e97-9073-cf59209bfbf3" containerName="registry" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.857909 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4943ec6-a5a3-4e97-9073-cf59209bfbf3" containerName="registry" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.858383 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-4dcdb" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.863203 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-v72fq"] Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.864323 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-v72fq" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.864615 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.864827 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.871422 4756 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-qghb4" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.871765 4756 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-68fbd" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.878902 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-4dcdb"] Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.895684 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-v72fq"] Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.903447 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-4xzdf"] Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.904187 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-4xzdf" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.913149 4756 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-r425t" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.920474 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfnwq\" (UniqueName: \"kubernetes.io/projected/f2d8ba73-901c-4245-bb1f-37c63a3b7232-kube-api-access-qfnwq\") pod \"cert-manager-webhook-5655c58dd6-4xzdf\" (UID: \"f2d8ba73-901c-4245-bb1f-37c63a3b7232\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-4xzdf" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.920560 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57ccw\" (UniqueName: \"kubernetes.io/projected/029ff7e6-28c1-4bf7-9b5b-575230d5ed04-kube-api-access-57ccw\") pod \"cert-manager-cainjector-7f985d654d-4dcdb\" (UID: \"029ff7e6-28c1-4bf7-9b5b-575230d5ed04\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-4dcdb" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.920611 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq6pp\" (UniqueName: \"kubernetes.io/projected/50967785-12d1-45d3-b9e1-03c7dcb00af4-kube-api-access-hq6pp\") pod \"cert-manager-5b446d88c5-v72fq\" (UID: \"50967785-12d1-45d3-b9e1-03c7dcb00af4\") " pod="cert-manager/cert-manager-5b446d88c5-v72fq" Nov 24 12:36:31 crc kubenswrapper[4756]: I1124 12:36:31.927995 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-4xzdf"] Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.022488 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57ccw\" (UniqueName: 
\"kubernetes.io/projected/029ff7e6-28c1-4bf7-9b5b-575230d5ed04-kube-api-access-57ccw\") pod \"cert-manager-cainjector-7f985d654d-4dcdb\" (UID: \"029ff7e6-28c1-4bf7-9b5b-575230d5ed04\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-4dcdb" Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.022561 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq6pp\" (UniqueName: \"kubernetes.io/projected/50967785-12d1-45d3-b9e1-03c7dcb00af4-kube-api-access-hq6pp\") pod \"cert-manager-5b446d88c5-v72fq\" (UID: \"50967785-12d1-45d3-b9e1-03c7dcb00af4\") " pod="cert-manager/cert-manager-5b446d88c5-v72fq" Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.022590 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfnwq\" (UniqueName: \"kubernetes.io/projected/f2d8ba73-901c-4245-bb1f-37c63a3b7232-kube-api-access-qfnwq\") pod \"cert-manager-webhook-5655c58dd6-4xzdf\" (UID: \"f2d8ba73-901c-4245-bb1f-37c63a3b7232\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-4xzdf" Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.043868 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq6pp\" (UniqueName: \"kubernetes.io/projected/50967785-12d1-45d3-b9e1-03c7dcb00af4-kube-api-access-hq6pp\") pod \"cert-manager-5b446d88c5-v72fq\" (UID: \"50967785-12d1-45d3-b9e1-03c7dcb00af4\") " pod="cert-manager/cert-manager-5b446d88c5-v72fq" Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.048127 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfnwq\" (UniqueName: \"kubernetes.io/projected/f2d8ba73-901c-4245-bb1f-37c63a3b7232-kube-api-access-qfnwq\") pod \"cert-manager-webhook-5655c58dd6-4xzdf\" (UID: \"f2d8ba73-901c-4245-bb1f-37c63a3b7232\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-4xzdf" Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.048682 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-57ccw\" (UniqueName: \"kubernetes.io/projected/029ff7e6-28c1-4bf7-9b5b-575230d5ed04-kube-api-access-57ccw\") pod \"cert-manager-cainjector-7f985d654d-4dcdb\" (UID: \"029ff7e6-28c1-4bf7-9b5b-575230d5ed04\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-4dcdb" Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.185883 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-4dcdb" Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.194235 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-v72fq" Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.219800 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-4xzdf" Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.416985 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-v72fq"] Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.427265 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.456470 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-4dcdb"] Nov 24 12:36:32 crc kubenswrapper[4756]: W1124 12:36:32.467287 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod029ff7e6_28c1_4bf7_9b5b_575230d5ed04.slice/crio-c183b1025530ac54195f7f3820d71f9f3b87825df9da0e48a42661ac59d5c694 WatchSource:0}: Error finding container c183b1025530ac54195f7f3820d71f9f3b87825df9da0e48a42661ac59d5c694: Status 404 returned error can't find the container with id c183b1025530ac54195f7f3820d71f9f3b87825df9da0e48a42661ac59d5c694 Nov 24 12:36:32 crc 
kubenswrapper[4756]: W1124 12:36:32.508777 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2d8ba73_901c_4245_bb1f_37c63a3b7232.slice/crio-d43674166b3b40038e417e74eb4c985bcb96903a553fe0b431203e4c4056e32d WatchSource:0}: Error finding container d43674166b3b40038e417e74eb4c985bcb96903a553fe0b431203e4c4056e32d: Status 404 returned error can't find the container with id d43674166b3b40038e417e74eb4c985bcb96903a553fe0b431203e4c4056e32d Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.513868 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-4xzdf"] Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.820684 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-4xzdf" event={"ID":"f2d8ba73-901c-4245-bb1f-37c63a3b7232","Type":"ContainerStarted","Data":"d43674166b3b40038e417e74eb4c985bcb96903a553fe0b431203e4c4056e32d"} Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.822682 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-v72fq" event={"ID":"50967785-12d1-45d3-b9e1-03c7dcb00af4","Type":"ContainerStarted","Data":"ef897b140c58e28fb7efc03c51b24991e405304b5d0a02c2dfffb7838d0c81d8"} Nov 24 12:36:32 crc kubenswrapper[4756]: I1124 12:36:32.824562 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-4dcdb" event={"ID":"029ff7e6-28c1-4bf7-9b5b-575230d5ed04","Type":"ContainerStarted","Data":"c183b1025530ac54195f7f3820d71f9f3b87825df9da0e48a42661ac59d5c694"} Nov 24 12:36:33 crc kubenswrapper[4756]: I1124 12:36:33.479470 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Nov 24 12:36:33 crc kubenswrapper[4756]: I1124 12:36:33.479557 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:36:36 crc kubenswrapper[4756]: I1124 12:36:36.863119 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-4dcdb" event={"ID":"029ff7e6-28c1-4bf7-9b5b-575230d5ed04","Type":"ContainerStarted","Data":"9494a950aa815f98a6c4920d2ca32cb28c765e0d543824e929b80535fee07b28"} Nov 24 12:36:36 crc kubenswrapper[4756]: I1124 12:36:36.867021 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-4xzdf" event={"ID":"f2d8ba73-901c-4245-bb1f-37c63a3b7232","Type":"ContainerStarted","Data":"f30d169d7f2bdab8a029d48abd80a422fd852b467120f419016c636d1301259e"} Nov 24 12:36:36 crc kubenswrapper[4756]: I1124 12:36:36.869049 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-v72fq" event={"ID":"50967785-12d1-45d3-b9e1-03c7dcb00af4","Type":"ContainerStarted","Data":"e779fcfecf20de64eccfd1029ba7f6c48b5a183624643ddeccd373adcf14e2c8"} Nov 24 12:36:36 crc kubenswrapper[4756]: I1124 12:36:36.892875 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-4dcdb" podStartSLOduration=2.080207371 podStartE2EDuration="5.892840083s" podCreationTimestamp="2025-11-24 12:36:31 +0000 UTC" firstStartedPulling="2025-11-24 12:36:32.470258746 +0000 UTC m=+524.827772888" lastFinishedPulling="2025-11-24 12:36:36.282891468 +0000 UTC m=+528.640405600" observedRunningTime="2025-11-24 12:36:36.889090607 +0000 UTC m=+529.246604759" watchObservedRunningTime="2025-11-24 
12:36:36.892840083 +0000 UTC m=+529.250354215" Nov 24 12:36:36 crc kubenswrapper[4756]: I1124 12:36:36.909952 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-v72fq" podStartSLOduration=2.057891831 podStartE2EDuration="5.909924265s" podCreationTimestamp="2025-11-24 12:36:31 +0000 UTC" firstStartedPulling="2025-11-24 12:36:32.427027296 +0000 UTC m=+524.784541438" lastFinishedPulling="2025-11-24 12:36:36.27905971 +0000 UTC m=+528.636573872" observedRunningTime="2025-11-24 12:36:36.903747931 +0000 UTC m=+529.261262113" watchObservedRunningTime="2025-11-24 12:36:36.909924265 +0000 UTC m=+529.267438447" Nov 24 12:36:36 crc kubenswrapper[4756]: I1124 12:36:36.920893 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-4xzdf" podStartSLOduration=2.151874654 podStartE2EDuration="5.920867434s" podCreationTimestamp="2025-11-24 12:36:31 +0000 UTC" firstStartedPulling="2025-11-24 12:36:32.520593727 +0000 UTC m=+524.878107869" lastFinishedPulling="2025-11-24 12:36:36.289586507 +0000 UTC m=+528.647100649" observedRunningTime="2025-11-24 12:36:36.918797886 +0000 UTC m=+529.276312038" watchObservedRunningTime="2025-11-24 12:36:36.920867434 +0000 UTC m=+529.278381576" Nov 24 12:36:37 crc kubenswrapper[4756]: I1124 12:36:37.220302 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-4xzdf" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.173726 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hnsz7"] Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.175937 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovn-controller" 
containerID="cri-o://750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280" gracePeriod=30 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.176703 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="sbdb" containerID="cri-o://8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3" gracePeriod=30 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.176800 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="nbdb" containerID="cri-o://a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad" gracePeriod=30 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.176870 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="northd" containerID="cri-o://9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5" gracePeriod=30 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.176930 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05" gracePeriod=30 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.176989 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="kube-rbac-proxy-node" containerID="cri-o://46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed" gracePeriod=30 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.177049 4756 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovn-acl-logging" containerID="cri-o://3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3" gracePeriod=30 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.230121 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-4xzdf" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.268530 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovnkube-controller" containerID="cri-o://c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47" gracePeriod=30 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.589884 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovnkube-controller/2.log" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.592527 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovn-acl-logging/0.log" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.593099 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovn-controller/0.log" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.593658 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.648763 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mpncb"] Nov 24 12:36:42 crc kubenswrapper[4756]: E1124 12:36:42.649029 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovn-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649044 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovn-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: E1124 12:36:42.649058 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649064 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 12:36:42 crc kubenswrapper[4756]: E1124 12:36:42.649077 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="northd" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649084 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="northd" Nov 24 12:36:42 crc kubenswrapper[4756]: E1124 12:36:42.649094 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovnkube-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649106 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovnkube-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: E1124 12:36:42.649116 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="kube-rbac-proxy-node" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649121 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="kube-rbac-proxy-node" Nov 24 12:36:42 crc kubenswrapper[4756]: E1124 12:36:42.649130 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovnkube-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649137 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovnkube-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: E1124 12:36:42.649145 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="kubecfg-setup" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649154 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="kubecfg-setup" Nov 24 12:36:42 crc kubenswrapper[4756]: E1124 12:36:42.649313 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovnkube-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649322 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovnkube-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: E1124 12:36:42.649330 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovn-acl-logging" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649336 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovn-acl-logging" Nov 24 12:36:42 crc kubenswrapper[4756]: E1124 12:36:42.649346 4756 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="nbdb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649353 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="nbdb" Nov 24 12:36:42 crc kubenswrapper[4756]: E1124 12:36:42.649365 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="sbdb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649372 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="sbdb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649461 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovnkube-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649472 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="sbdb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649481 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovnkube-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649491 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovnkube-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649500 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="nbdb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649511 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649518 4756 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovn-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649526 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovn-acl-logging" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649533 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="kube-rbac-proxy-node" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649540 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="northd" Nov 24 12:36:42 crc kubenswrapper[4756]: E1124 12:36:42.649627 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovnkube-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649635 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovnkube-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.649728 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerName="ovnkube-controller" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.652071 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678058 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-run-netns\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678130 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zw8x\" (UniqueName: \"kubernetes.io/projected/60bc5508-89b8-4cc3-a0d6-e30abed70f05-kube-api-access-8zw8x\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678154 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-systemd-units\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678218 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovnkube-script-lib\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678252 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-systemd\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678229 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678277 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-cni-netd\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678316 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-log-socket\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678343 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-run-ovn-kubernetes\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678367 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-env-overrides\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678398 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-slash\") pod 
\"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678428 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-node-log\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678448 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-etc-openvswitch\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678467 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-var-lib-cni-networks-ovn-kubernetes\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678503 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-ovn\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678526 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovnkube-config\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678545 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-var-lib-openvswitch\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678575 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-cni-bin\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678620 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovn-node-metrics-cert\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678655 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-kubelet\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678676 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-openvswitch\") pod \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\" (UID: \"60bc5508-89b8-4cc3-a0d6-e30abed70f05\") " Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678739 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" 
(UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678775 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678798 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678855 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-slash\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678924 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-run-ovn-kubernetes\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678943 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-run-openvswitch\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678960 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-node-log\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.678978 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-etc-openvswitch\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679001 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-kubelet\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679020 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-cni-bin\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679037 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-ljqrt\" (UniqueName: \"kubernetes.io/projected/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-kube-api-access-ljqrt\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679062 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-ovn-node-metrics-cert\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679091 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679126 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-run-ovn\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679149 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-cni-netd\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679194 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-run-netns\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679212 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679283 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679304 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-slash" (OuterVolumeSpecName: "host-slash") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679352 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679341 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-log-socket" (OuterVolumeSpecName: "log-socket") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679382 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679417 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679437 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679469 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-node-log" (OuterVolumeSpecName: "node-log") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679474 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-ovnkube-script-lib\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679516 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-log-socket\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679496 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-ovn" (OuterVolumeSpecName: "run-ovn") pod 
"60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679610 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679648 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-env-overrides\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679710 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679805 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-ovnkube-config\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679828 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679934 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-var-lib-openvswitch\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.679978 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-systemd-units\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680048 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-run-systemd\") pod 
\"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680186 4756 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680203 4756 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680214 4756 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680225 4756 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680236 4756 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680245 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680254 4756 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-var-lib-openvswitch\") on 
node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680265 4756 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680277 4756 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680286 4756 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680295 4756 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680304 4756 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680314 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680322 4756 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680333 4756 reconciler_common.go:293] 
"Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680354 4756 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.680385 4756 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60bc5508-89b8-4cc3-a0d6-e30abed70f05-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.686851 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60bc5508-89b8-4cc3-a0d6-e30abed70f05-kube-api-access-8zw8x" (OuterVolumeSpecName: "kube-api-access-8zw8x") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "kube-api-access-8zw8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.686899 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.695997 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "60bc5508-89b8-4cc3-a0d6-e30abed70f05" (UID: "60bc5508-89b8-4cc3-a0d6-e30abed70f05"). 
InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782184 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-run-systemd\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782282 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-slash\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782330 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-run-ovn-kubernetes\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782348 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-slash\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782348 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-run-systemd\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 
12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782357 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-run-openvswitch\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782417 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-run-openvswitch\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782486 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-run-ovn-kubernetes\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782580 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-node-log\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782623 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-etc-openvswitch\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782669 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-kubelet\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782701 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-cni-bin\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782711 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-node-log\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782719 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljqrt\" (UniqueName: \"kubernetes.io/projected/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-kube-api-access-ljqrt\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782817 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-ovn-node-metrics-cert\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782868 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782935 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-run-ovn\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782963 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-cni-netd\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.782990 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-run-netns\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783008 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-ovnkube-script-lib\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783024 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-log-socket\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783053 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-env-overrides\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783072 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-etc-openvswitch\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783088 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-ovnkube-config\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783105 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-kubelet\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783140 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-run-netns\") pod \"ovnkube-node-mpncb\" 
(UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783146 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-cni-bin\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783461 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-run-ovn\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783497 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783499 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-log-socket\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783540 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-var-lib-openvswitch\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.784321 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-systemd-units\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.784834 4756 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60bc5508-89b8-4cc3-a0d6-e30abed70f05-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.784886 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zw8x\" (UniqueName: \"kubernetes.io/projected/60bc5508-89b8-4cc3-a0d6-e30abed70f05-kube-api-access-8zw8x\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.784901 4756 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60bc5508-89b8-4cc3-a0d6-e30abed70f05-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783567 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-host-cni-netd\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.784264 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-ovnkube-config\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.784417 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-env-overrides\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.784391 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-ovnkube-script-lib\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.784423 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-systemd-units\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.783616 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-var-lib-openvswitch\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.787084 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-ovn-node-metrics-cert\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: 
I1124 12:36:42.797720 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljqrt\" (UniqueName: \"kubernetes.io/projected/4af8e4d5-bb57-46ee-96b2-a5ec680ca10e-kube-api-access-ljqrt\") pod \"ovnkube-node-mpncb\" (UID: \"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.942949 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovnkube-controller/2.log" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.947445 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovn-acl-logging/0.log" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948292 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hnsz7_60bc5508-89b8-4cc3-a0d6-e30abed70f05/ovn-controller/0.log" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948755 4756 generic.go:334] "Generic (PLEG): container finished" podID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerID="c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47" exitCode=0 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948782 4756 generic.go:334] "Generic (PLEG): container finished" podID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerID="8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3" exitCode=0 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948791 4756 generic.go:334] "Generic (PLEG): container finished" podID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerID="a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad" exitCode=0 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948799 4756 generic.go:334] "Generic (PLEG): container finished" podID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" 
containerID="9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5" exitCode=0 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948806 4756 generic.go:334] "Generic (PLEG): container finished" podID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerID="000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05" exitCode=0 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948816 4756 generic.go:334] "Generic (PLEG): container finished" podID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerID="46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed" exitCode=0 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948826 4756 generic.go:334] "Generic (PLEG): container finished" podID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" containerID="3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3" exitCode=143 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948813 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerDied","Data":"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948894 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerDied","Data":"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948916 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerDied","Data":"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948834 4756 generic.go:334] "Generic (PLEG): container finished" podID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" 
containerID="750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280" exitCode=143 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948934 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerDied","Data":"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948944 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948984 4756 scope.go:117] "RemoveContainer" containerID="c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.948963 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerDied","Data":"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949206 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerDied","Data":"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949243 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949264 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949272 4756 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949279 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949286 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949293 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949301 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949307 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949314 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949324 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerDied","Data":"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3"} Nov 24 
12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949339 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949346 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949352 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949357 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949362 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949367 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949373 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949378 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3"} Nov 24 
12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949383 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949388 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949396 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerDied","Data":"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949404 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949414 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949420 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949425 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949430 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949435 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949442 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949447 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949452 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949458 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949466 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hnsz7" event={"ID":"60bc5508-89b8-4cc3-a0d6-e30abed70f05","Type":"ContainerDied","Data":"7aab239b9fa2235f71e0cbe265d27742ad38c7d617b5a69f8ebd8883a162a125"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949473 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949481 4756 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949486 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949492 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949498 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949503 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949512 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949516 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949522 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.949527 4756 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.955574 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-66bwb_077d4abb-b72e-499f-98c2-628720d701dc/kube-multus/1.log" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.956121 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-66bwb_077d4abb-b72e-499f-98c2-628720d701dc/kube-multus/0.log" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.956200 4756 generic.go:334] "Generic (PLEG): container finished" podID="077d4abb-b72e-499f-98c2-628720d701dc" containerID="a11a2d7708b797f4b4938bfdb18ee927433d3844be3300a7087fda27661b4d17" exitCode=2 Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.956235 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-66bwb" event={"ID":"077d4abb-b72e-499f-98c2-628720d701dc","Type":"ContainerDied","Data":"a11a2d7708b797f4b4938bfdb18ee927433d3844be3300a7087fda27661b4d17"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.956260 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf"} Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.956967 4756 scope.go:117] "RemoveContainer" containerID="a11a2d7708b797f4b4938bfdb18ee927433d3844be3300a7087fda27661b4d17" Nov 24 12:36:42 crc kubenswrapper[4756]: E1124 12:36:42.957239 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-66bwb_openshift-multus(077d4abb-b72e-499f-98c2-628720d701dc)\"" pod="openshift-multus/multus-66bwb" 
podUID="077d4abb-b72e-499f-98c2-628720d701dc" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.968832 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:42 crc kubenswrapper[4756]: I1124 12:36:42.976534 4756 scope.go:117] "RemoveContainer" containerID="2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.004571 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hnsz7"] Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.008324 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hnsz7"] Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.021273 4756 scope.go:117] "RemoveContainer" containerID="8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.037840 4756 scope.go:117] "RemoveContainer" containerID="a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.061344 4756 scope.go:117] "RemoveContainer" containerID="9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.077399 4756 scope.go:117] "RemoveContainer" containerID="000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.101181 4756 scope.go:117] "RemoveContainer" containerID="46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.133791 4756 scope.go:117] "RemoveContainer" containerID="3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.156396 4756 scope.go:117] "RemoveContainer" 
containerID="750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.178982 4756 scope.go:117] "RemoveContainer" containerID="7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.250087 4756 scope.go:117] "RemoveContainer" containerID="c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47" Nov 24 12:36:43 crc kubenswrapper[4756]: E1124 12:36:43.250639 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47\": container with ID starting with c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47 not found: ID does not exist" containerID="c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.250689 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47"} err="failed to get container status \"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47\": rpc error: code = NotFound desc = could not find container \"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47\": container with ID starting with c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.250720 4756 scope.go:117] "RemoveContainer" containerID="2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474" Nov 24 12:36:43 crc kubenswrapper[4756]: E1124 12:36:43.251411 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\": container with ID starting with 
2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474 not found: ID does not exist" containerID="2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.251438 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474"} err="failed to get container status \"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\": rpc error: code = NotFound desc = could not find container \"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\": container with ID starting with 2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.251458 4756 scope.go:117] "RemoveContainer" containerID="8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3" Nov 24 12:36:43 crc kubenswrapper[4756]: E1124 12:36:43.252054 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\": container with ID starting with 8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3 not found: ID does not exist" containerID="8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.252112 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3"} err="failed to get container status \"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\": rpc error: code = NotFound desc = could not find container \"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\": container with ID starting with 8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3 not found: ID does not 
exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.252152 4756 scope.go:117] "RemoveContainer" containerID="a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad" Nov 24 12:36:43 crc kubenswrapper[4756]: E1124 12:36:43.253105 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\": container with ID starting with a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad not found: ID does not exist" containerID="a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.253138 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad"} err="failed to get container status \"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\": rpc error: code = NotFound desc = could not find container \"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\": container with ID starting with a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.253179 4756 scope.go:117] "RemoveContainer" containerID="9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5" Nov 24 12:36:43 crc kubenswrapper[4756]: E1124 12:36:43.253568 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\": container with ID starting with 9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5 not found: ID does not exist" containerID="9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.253656 4756 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5"} err="failed to get container status \"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\": rpc error: code = NotFound desc = could not find container \"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\": container with ID starting with 9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.253683 4756 scope.go:117] "RemoveContainer" containerID="000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05" Nov 24 12:36:43 crc kubenswrapper[4756]: E1124 12:36:43.254069 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\": container with ID starting with 000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05 not found: ID does not exist" containerID="000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.254120 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05"} err="failed to get container status \"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\": rpc error: code = NotFound desc = could not find container \"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\": container with ID starting with 000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.254215 4756 scope.go:117] "RemoveContainer" containerID="46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed" Nov 24 12:36:43 crc kubenswrapper[4756]: E1124 12:36:43.254666 4756 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\": container with ID starting with 46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed not found: ID does not exist" containerID="46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.254757 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed"} err="failed to get container status \"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\": rpc error: code = NotFound desc = could not find container \"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\": container with ID starting with 46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.254819 4756 scope.go:117] "RemoveContainer" containerID="3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3" Nov 24 12:36:43 crc kubenswrapper[4756]: E1124 12:36:43.255079 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\": container with ID starting with 3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3 not found: ID does not exist" containerID="3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.255107 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3"} err="failed to get container status \"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\": rpc error: code = NotFound desc = could 
not find container \"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\": container with ID starting with 3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.255123 4756 scope.go:117] "RemoveContainer" containerID="750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280" Nov 24 12:36:43 crc kubenswrapper[4756]: E1124 12:36:43.255379 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\": container with ID starting with 750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280 not found: ID does not exist" containerID="750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.255406 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280"} err="failed to get container status \"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\": rpc error: code = NotFound desc = could not find container \"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\": container with ID starting with 750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.255423 4756 scope.go:117] "RemoveContainer" containerID="7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa" Nov 24 12:36:43 crc kubenswrapper[4756]: E1124 12:36:43.255609 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\": container with ID starting with 7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa not found: 
ID does not exist" containerID="7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.255646 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa"} err="failed to get container status \"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\": rpc error: code = NotFound desc = could not find container \"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\": container with ID starting with 7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.255668 4756 scope.go:117] "RemoveContainer" containerID="c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.255855 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47"} err="failed to get container status \"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47\": rpc error: code = NotFound desc = could not find container \"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47\": container with ID starting with c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.255891 4756 scope.go:117] "RemoveContainer" containerID="2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.256119 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474"} err="failed to get container status \"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\": rpc error: code = 
NotFound desc = could not find container \"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\": container with ID starting with 2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.256145 4756 scope.go:117] "RemoveContainer" containerID="8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.256630 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3"} err="failed to get container status \"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\": rpc error: code = NotFound desc = could not find container \"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\": container with ID starting with 8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.256679 4756 scope.go:117] "RemoveContainer" containerID="a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.257032 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad"} err="failed to get container status \"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\": rpc error: code = NotFound desc = could not find container \"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\": container with ID starting with a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.257072 4756 scope.go:117] "RemoveContainer" containerID="9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5" Nov 24 12:36:43 crc 
kubenswrapper[4756]: I1124 12:36:43.257833 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5"} err="failed to get container status \"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\": rpc error: code = NotFound desc = could not find container \"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\": container with ID starting with 9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.257867 4756 scope.go:117] "RemoveContainer" containerID="000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.258242 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05"} err="failed to get container status \"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\": rpc error: code = NotFound desc = could not find container \"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\": container with ID starting with 000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.258273 4756 scope.go:117] "RemoveContainer" containerID="46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.258817 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed"} err="failed to get container status \"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\": rpc error: code = NotFound desc = could not find container \"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\": container 
with ID starting with 46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.258839 4756 scope.go:117] "RemoveContainer" containerID="3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.259184 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3"} err="failed to get container status \"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\": rpc error: code = NotFound desc = could not find container \"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\": container with ID starting with 3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.259207 4756 scope.go:117] "RemoveContainer" containerID="750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.259506 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280"} err="failed to get container status \"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\": rpc error: code = NotFound desc = could not find container \"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\": container with ID starting with 750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.259573 4756 scope.go:117] "RemoveContainer" containerID="7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.260422 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa"} err="failed to get container status \"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\": rpc error: code = NotFound desc = could not find container \"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\": container with ID starting with 7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.260457 4756 scope.go:117] "RemoveContainer" containerID="c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.260783 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47"} err="failed to get container status \"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47\": rpc error: code = NotFound desc = could not find container \"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47\": container with ID starting with c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.260806 4756 scope.go:117] "RemoveContainer" containerID="2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.261226 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474"} err="failed to get container status \"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\": rpc error: code = NotFound desc = could not find container \"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\": container with ID starting with 2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474 not found: ID does not 
exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.261256 4756 scope.go:117] "RemoveContainer" containerID="8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.261561 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3"} err="failed to get container status \"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\": rpc error: code = NotFound desc = could not find container \"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\": container with ID starting with 8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.261609 4756 scope.go:117] "RemoveContainer" containerID="a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.262197 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad"} err="failed to get container status \"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\": rpc error: code = NotFound desc = could not find container \"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\": container with ID starting with a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.262227 4756 scope.go:117] "RemoveContainer" containerID="9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.262476 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5"} err="failed to get container status 
\"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\": rpc error: code = NotFound desc = could not find container \"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\": container with ID starting with 9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.262499 4756 scope.go:117] "RemoveContainer" containerID="000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.263281 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05"} err="failed to get container status \"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\": rpc error: code = NotFound desc = could not find container \"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\": container with ID starting with 000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.263331 4756 scope.go:117] "RemoveContainer" containerID="46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.263809 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed"} err="failed to get container status \"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\": rpc error: code = NotFound desc = could not find container \"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\": container with ID starting with 46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.263849 4756 scope.go:117] "RemoveContainer" 
containerID="3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.264264 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3"} err="failed to get container status \"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\": rpc error: code = NotFound desc = could not find container \"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\": container with ID starting with 3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.264303 4756 scope.go:117] "RemoveContainer" containerID="750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.264719 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280"} err="failed to get container status \"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\": rpc error: code = NotFound desc = could not find container \"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\": container with ID starting with 750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.264758 4756 scope.go:117] "RemoveContainer" containerID="7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.265044 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa"} err="failed to get container status \"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\": rpc error: code = NotFound desc = could 
not find container \"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\": container with ID starting with 7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.265085 4756 scope.go:117] "RemoveContainer" containerID="c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.265916 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47"} err="failed to get container status \"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47\": rpc error: code = NotFound desc = could not find container \"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47\": container with ID starting with c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.265958 4756 scope.go:117] "RemoveContainer" containerID="2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.266405 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474"} err="failed to get container status \"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\": rpc error: code = NotFound desc = could not find container \"2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474\": container with ID starting with 2cfc38c1e6b3e18078691ca9018202d3b467ab334da551c9ac495766ec10e474 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.266449 4756 scope.go:117] "RemoveContainer" containerID="8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 
12:36:43.266883 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3"} err="failed to get container status \"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\": rpc error: code = NotFound desc = could not find container \"8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3\": container with ID starting with 8464c7084f6c1adb29b36234551a86e57c84a4e7d81b317424f7a86167213db3 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.266910 4756 scope.go:117] "RemoveContainer" containerID="a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.267297 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad"} err="failed to get container status \"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\": rpc error: code = NotFound desc = could not find container \"a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad\": container with ID starting with a0f11ad646d80b94e584c39f6486c14e3cf0bc22384b3b077584417521f098ad not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.267355 4756 scope.go:117] "RemoveContainer" containerID="9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.268246 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5"} err="failed to get container status \"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\": rpc error: code = NotFound desc = could not find container \"9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5\": container with ID starting with 
9ecd59674e10a21b1847ee06e12577e84a2365cd1384a0fc47c3c1f3f4136de5 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.268271 4756 scope.go:117] "RemoveContainer" containerID="000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.268639 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05"} err="failed to get container status \"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\": rpc error: code = NotFound desc = could not find container \"000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05\": container with ID starting with 000723f981b592fb204b0b751714bc980912aae53924cb8e543d8a56dc261e05 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.268680 4756 scope.go:117] "RemoveContainer" containerID="46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.269023 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed"} err="failed to get container status \"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\": rpc error: code = NotFound desc = could not find container \"46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed\": container with ID starting with 46788b584569948b897c49550e94fa59c8031e7ea9638941eea13ff54f322eed not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.269066 4756 scope.go:117] "RemoveContainer" containerID="3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.269906 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3"} err="failed to get container status \"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\": rpc error: code = NotFound desc = could not find container \"3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3\": container with ID starting with 3ea109e6865837f76890c6bab23eb3bc243250c8d0c20cc269ca169ab2b163e3 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.269946 4756 scope.go:117] "RemoveContainer" containerID="750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.270417 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280"} err="failed to get container status \"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\": rpc error: code = NotFound desc = could not find container \"750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280\": container with ID starting with 750c53b00b1a9dae9bf4085b65946e3965f204570c1a22126660ae5228d3b280 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.270459 4756 scope.go:117] "RemoveContainer" containerID="7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.270887 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa"} err="failed to get container status \"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\": rpc error: code = NotFound desc = could not find container \"7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa\": container with ID starting with 7add6a1ed5e874e0c20cf542536eacd0a2f4e4dbe253cce15d37279736cb73fa not found: ID does not 
exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.270926 4756 scope.go:117] "RemoveContainer" containerID="c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.272503 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47"} err="failed to get container status \"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47\": rpc error: code = NotFound desc = could not find container \"c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47\": container with ID starting with c0710a922d3f2f6c5d7e3fe3054518ff1fd8708c792bc71e72891d017ac99d47 not found: ID does not exist" Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.967623 4756 generic.go:334] "Generic (PLEG): container finished" podID="4af8e4d5-bb57-46ee-96b2-a5ec680ca10e" containerID="706b3dc737428ec70652ab0e8f23921833cf8d7abe06576d724c09ece2c8bd89" exitCode=0 Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.967719 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" event={"ID":"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e","Type":"ContainerDied","Data":"706b3dc737428ec70652ab0e8f23921833cf8d7abe06576d724c09ece2c8bd89"} Nov 24 12:36:43 crc kubenswrapper[4756]: I1124 12:36:43.967764 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" event={"ID":"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e","Type":"ContainerStarted","Data":"a85ef961e6b47aad2fb3ae9001d0576a305586e7b1c7d2292e753c4b8e8bd856"} Nov 24 12:36:44 crc kubenswrapper[4756]: I1124 12:36:44.485052 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60bc5508-89b8-4cc3-a0d6-e30abed70f05" path="/var/lib/kubelet/pods/60bc5508-89b8-4cc3-a0d6-e30abed70f05/volumes" Nov 24 12:36:44 crc kubenswrapper[4756]: I1124 12:36:44.979503 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" event={"ID":"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e","Type":"ContainerStarted","Data":"982dbfc55ee67abc4fb2d0105c7e8311b86b450d7194e568a1440bb865a2eeb4"} Nov 24 12:36:44 crc kubenswrapper[4756]: I1124 12:36:44.979975 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" event={"ID":"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e","Type":"ContainerStarted","Data":"15dd01f97cb3d290b5f0b93af8cdcec69b387e098637af25f0f374002859e994"} Nov 24 12:36:44 crc kubenswrapper[4756]: I1124 12:36:44.979991 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" event={"ID":"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e","Type":"ContainerStarted","Data":"bdc21139c179f737b25f3543eb7e0d83785803fa199bc6efe0eae1b5fc94a156"} Nov 24 12:36:44 crc kubenswrapper[4756]: I1124 12:36:44.980002 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" event={"ID":"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e","Type":"ContainerStarted","Data":"b0d5b336ea708ac7ada9f84e882fed6ff3c2e385a94f2bb94c0f3b70b45d99ea"} Nov 24 12:36:44 crc kubenswrapper[4756]: I1124 12:36:44.980019 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" event={"ID":"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e","Type":"ContainerStarted","Data":"6a2091a23c7129df6e0e3d99109798a311a88b7fe0200a88d7a5c3980ff1d2b0"} Nov 24 12:36:44 crc kubenswrapper[4756]: I1124 12:36:44.980031 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" event={"ID":"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e","Type":"ContainerStarted","Data":"0eb21e9d2f594ef44425a873b1774a5e5ef40190992ed922d60f9fd880e1f373"} Nov 24 12:36:48 crc kubenswrapper[4756]: I1124 12:36:48.004535 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" event={"ID":"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e","Type":"ContainerStarted","Data":"17109c84f5b025d262f431ba8af8bc6b12ff8cf58d4d029cfdaab6c7717e726e"} Nov 24 12:36:48 crc kubenswrapper[4756]: I1124 12:36:48.688949 4756 scope.go:117] "RemoveContainer" containerID="e1ea4fc4e506bf3465c9a517ac3625a35532f9c23f76cc6db03353424da183cf" Nov 24 12:36:49 crc kubenswrapper[4756]: I1124 12:36:49.014470 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-66bwb_077d4abb-b72e-499f-98c2-628720d701dc/kube-multus/1.log" Nov 24 12:36:50 crc kubenswrapper[4756]: I1124 12:36:50.027957 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" event={"ID":"4af8e4d5-bb57-46ee-96b2-a5ec680ca10e","Type":"ContainerStarted","Data":"27da77e56b5d9e2b6e7245dfecd5489565f1cb41328a8afda80d93a5d64ebea0"} Nov 24 12:36:50 crc kubenswrapper[4756]: I1124 12:36:50.028418 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:50 crc kubenswrapper[4756]: I1124 12:36:50.028464 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:50 crc kubenswrapper[4756]: I1124 12:36:50.028607 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:50 crc kubenswrapper[4756]: I1124 12:36:50.058455 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:50 crc kubenswrapper[4756]: I1124 12:36:50.066681 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" podStartSLOduration=8.066651495 podStartE2EDuration="8.066651495s" podCreationTimestamp="2025-11-24 12:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:36:50.06187025 +0000 UTC m=+542.419384392" watchObservedRunningTime="2025-11-24 12:36:50.066651495 +0000 UTC m=+542.424165647" Nov 24 12:36:50 crc kubenswrapper[4756]: I1124 12:36:50.071191 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:36:58 crc kubenswrapper[4756]: I1124 12:36:58.479307 4756 scope.go:117] "RemoveContainer" containerID="a11a2d7708b797f4b4938bfdb18ee927433d3844be3300a7087fda27661b4d17" Nov 24 12:36:59 crc kubenswrapper[4756]: I1124 12:36:59.091789 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-66bwb_077d4abb-b72e-499f-98c2-628720d701dc/kube-multus/1.log" Nov 24 12:36:59 crc kubenswrapper[4756]: I1124 12:36:59.092246 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-66bwb" event={"ID":"077d4abb-b72e-499f-98c2-628720d701dc","Type":"ContainerStarted","Data":"b886863723105b9e5927ece3901f93350d52eea41ca12bcc1c764dc8706e0b3a"} Nov 24 12:37:03 crc kubenswrapper[4756]: I1124 12:37:03.479840 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:37:03 crc kubenswrapper[4756]: I1124 12:37:03.480397 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:37:12 crc kubenswrapper[4756]: I1124 12:37:12.961633 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62"] Nov 24 12:37:12 crc kubenswrapper[4756]: I1124 12:37:12.964833 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" Nov 24 12:37:12 crc kubenswrapper[4756]: I1124 12:37:12.970295 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 12:37:12 crc kubenswrapper[4756]: I1124 12:37:12.974413 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62"] Nov 24 12:37:13 crc kubenswrapper[4756]: I1124 12:37:13.003625 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mpncb" Nov 24 12:37:13 crc kubenswrapper[4756]: I1124 12:37:13.029784 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/851eb838-a360-48b4-a06e-85f114507ab6-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62\" (UID: \"851eb838-a360-48b4-a06e-85f114507ab6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" Nov 24 12:37:13 crc kubenswrapper[4756]: I1124 12:37:13.029865 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5fx7\" (UniqueName: \"kubernetes.io/projected/851eb838-a360-48b4-a06e-85f114507ab6-kube-api-access-g5fx7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62\" (UID: \"851eb838-a360-48b4-a06e-85f114507ab6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" Nov 24 12:37:13 crc kubenswrapper[4756]: I1124 12:37:13.029927 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/851eb838-a360-48b4-a06e-85f114507ab6-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62\" (UID: \"851eb838-a360-48b4-a06e-85f114507ab6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" Nov 24 12:37:13 crc kubenswrapper[4756]: I1124 12:37:13.130917 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/851eb838-a360-48b4-a06e-85f114507ab6-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62\" (UID: \"851eb838-a360-48b4-a06e-85f114507ab6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" Nov 24 12:37:13 crc kubenswrapper[4756]: I1124 12:37:13.131023 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/851eb838-a360-48b4-a06e-85f114507ab6-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62\" (UID: \"851eb838-a360-48b4-a06e-85f114507ab6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" Nov 24 12:37:13 crc kubenswrapper[4756]: I1124 12:37:13.131078 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5fx7\" (UniqueName: \"kubernetes.io/projected/851eb838-a360-48b4-a06e-85f114507ab6-kube-api-access-g5fx7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62\" (UID: \"851eb838-a360-48b4-a06e-85f114507ab6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" Nov 24 12:37:13 crc kubenswrapper[4756]: I1124 12:37:13.131636 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/851eb838-a360-48b4-a06e-85f114507ab6-util\") 
pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62\" (UID: \"851eb838-a360-48b4-a06e-85f114507ab6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" Nov 24 12:37:13 crc kubenswrapper[4756]: I1124 12:37:13.132131 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/851eb838-a360-48b4-a06e-85f114507ab6-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62\" (UID: \"851eb838-a360-48b4-a06e-85f114507ab6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" Nov 24 12:37:13 crc kubenswrapper[4756]: I1124 12:37:13.156764 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5fx7\" (UniqueName: \"kubernetes.io/projected/851eb838-a360-48b4-a06e-85f114507ab6-kube-api-access-g5fx7\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62\" (UID: \"851eb838-a360-48b4-a06e-85f114507ab6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" Nov 24 12:37:13 crc kubenswrapper[4756]: I1124 12:37:13.294340 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" Nov 24 12:37:13 crc kubenswrapper[4756]: I1124 12:37:13.479359 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62"] Nov 24 12:37:14 crc kubenswrapper[4756]: I1124 12:37:14.186595 4756 generic.go:334] "Generic (PLEG): container finished" podID="851eb838-a360-48b4-a06e-85f114507ab6" containerID="89bb22a1ba4ddc880fe3718490f2a291686e6beb122f4fbd04a0c9ded178406b" exitCode=0 Nov 24 12:37:14 crc kubenswrapper[4756]: I1124 12:37:14.186653 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" event={"ID":"851eb838-a360-48b4-a06e-85f114507ab6","Type":"ContainerDied","Data":"89bb22a1ba4ddc880fe3718490f2a291686e6beb122f4fbd04a0c9ded178406b"} Nov 24 12:37:14 crc kubenswrapper[4756]: I1124 12:37:14.186689 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" event={"ID":"851eb838-a360-48b4-a06e-85f114507ab6","Type":"ContainerStarted","Data":"7a6ed6498370d88d2811e7f952998264708c186e54372691b6b133fafc91ab2a"} Nov 24 12:37:16 crc kubenswrapper[4756]: I1124 12:37:16.203178 4756 generic.go:334] "Generic (PLEG): container finished" podID="851eb838-a360-48b4-a06e-85f114507ab6" containerID="0144fd1057eb4feeedd0eaeb152cb86e3ad9cf1e2a7f418edb52f1914078f041" exitCode=0 Nov 24 12:37:16 crc kubenswrapper[4756]: I1124 12:37:16.203271 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" event={"ID":"851eb838-a360-48b4-a06e-85f114507ab6","Type":"ContainerDied","Data":"0144fd1057eb4feeedd0eaeb152cb86e3ad9cf1e2a7f418edb52f1914078f041"} Nov 24 12:37:17 crc kubenswrapper[4756]: I1124 12:37:17.212053 4756 
generic.go:334] "Generic (PLEG): container finished" podID="851eb838-a360-48b4-a06e-85f114507ab6" containerID="4c6692cffd1cb94da221bf088e1e4f9a0c26f18bd3191ee3e4aa46491dce8d81" exitCode=0 Nov 24 12:37:17 crc kubenswrapper[4756]: I1124 12:37:17.212140 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" event={"ID":"851eb838-a360-48b4-a06e-85f114507ab6","Type":"ContainerDied","Data":"4c6692cffd1cb94da221bf088e1e4f9a0c26f18bd3191ee3e4aa46491dce8d81"} Nov 24 12:37:18 crc kubenswrapper[4756]: I1124 12:37:18.402133 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" Nov 24 12:37:18 crc kubenswrapper[4756]: I1124 12:37:18.501374 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/851eb838-a360-48b4-a06e-85f114507ab6-bundle\") pod \"851eb838-a360-48b4-a06e-85f114507ab6\" (UID: \"851eb838-a360-48b4-a06e-85f114507ab6\") " Nov 24 12:37:18 crc kubenswrapper[4756]: I1124 12:37:18.501500 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5fx7\" (UniqueName: \"kubernetes.io/projected/851eb838-a360-48b4-a06e-85f114507ab6-kube-api-access-g5fx7\") pod \"851eb838-a360-48b4-a06e-85f114507ab6\" (UID: \"851eb838-a360-48b4-a06e-85f114507ab6\") " Nov 24 12:37:18 crc kubenswrapper[4756]: I1124 12:37:18.501558 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/851eb838-a360-48b4-a06e-85f114507ab6-util\") pod \"851eb838-a360-48b4-a06e-85f114507ab6\" (UID: \"851eb838-a360-48b4-a06e-85f114507ab6\") " Nov 24 12:37:18 crc kubenswrapper[4756]: I1124 12:37:18.503795 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/851eb838-a360-48b4-a06e-85f114507ab6-bundle" (OuterVolumeSpecName: "bundle") pod "851eb838-a360-48b4-a06e-85f114507ab6" (UID: "851eb838-a360-48b4-a06e-85f114507ab6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:37:18 crc kubenswrapper[4756]: I1124 12:37:18.508669 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851eb838-a360-48b4-a06e-85f114507ab6-kube-api-access-g5fx7" (OuterVolumeSpecName: "kube-api-access-g5fx7") pod "851eb838-a360-48b4-a06e-85f114507ab6" (UID: "851eb838-a360-48b4-a06e-85f114507ab6"). InnerVolumeSpecName "kube-api-access-g5fx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:37:18 crc kubenswrapper[4756]: I1124 12:37:18.515754 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851eb838-a360-48b4-a06e-85f114507ab6-util" (OuterVolumeSpecName: "util") pod "851eb838-a360-48b4-a06e-85f114507ab6" (UID: "851eb838-a360-48b4-a06e-85f114507ab6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:37:18 crc kubenswrapper[4756]: I1124 12:37:18.603483 4756 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/851eb838-a360-48b4-a06e-85f114507ab6-util\") on node \"crc\" DevicePath \"\"" Nov 24 12:37:18 crc kubenswrapper[4756]: I1124 12:37:18.603544 4756 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/851eb838-a360-48b4-a06e-85f114507ab6-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:37:18 crc kubenswrapper[4756]: I1124 12:37:18.603558 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5fx7\" (UniqueName: \"kubernetes.io/projected/851eb838-a360-48b4-a06e-85f114507ab6-kube-api-access-g5fx7\") on node \"crc\" DevicePath \"\"" Nov 24 12:37:19 crc kubenswrapper[4756]: I1124 12:37:19.225255 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" event={"ID":"851eb838-a360-48b4-a06e-85f114507ab6","Type":"ContainerDied","Data":"7a6ed6498370d88d2811e7f952998264708c186e54372691b6b133fafc91ab2a"} Nov 24 12:37:19 crc kubenswrapper[4756]: I1124 12:37:19.225347 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62" Nov 24 12:37:19 crc kubenswrapper[4756]: I1124 12:37:19.225354 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a6ed6498370d88d2811e7f952998264708c186e54372691b6b133fafc91ab2a" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.023322 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-gw4jk"] Nov 24 12:37:27 crc kubenswrapper[4756]: E1124 12:37:27.023961 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851eb838-a360-48b4-a06e-85f114507ab6" containerName="extract" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.023975 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="851eb838-a360-48b4-a06e-85f114507ab6" containerName="extract" Nov 24 12:37:27 crc kubenswrapper[4756]: E1124 12:37:27.023993 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851eb838-a360-48b4-a06e-85f114507ab6" containerName="util" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.024001 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="851eb838-a360-48b4-a06e-85f114507ab6" containerName="util" Nov 24 12:37:27 crc kubenswrapper[4756]: E1124 12:37:27.024012 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851eb838-a360-48b4-a06e-85f114507ab6" containerName="pull" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.024020 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="851eb838-a360-48b4-a06e-85f114507ab6" containerName="pull" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.024141 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="851eb838-a360-48b4-a06e-85f114507ab6" containerName="extract" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.024636 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gw4jk" Nov 24 12:37:27 crc kubenswrapper[4756]: W1124 12:37:27.028036 4756 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-dockercfg-6nkbz": failed to list *v1.Secret: secrets "obo-prometheus-operator-dockercfg-6nkbz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Nov 24 12:37:27 crc kubenswrapper[4756]: E1124 12:37:27.028076 4756 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-6nkbz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"obo-prometheus-operator-dockercfg-6nkbz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.028880 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.031721 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.045464 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-gw4jk"] Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.113535 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxbkb\" (UniqueName: \"kubernetes.io/projected/0f42bf51-8a6c-4390-83a6-dbae6d26126a-kube-api-access-jxbkb\") pod \"obo-prometheus-operator-668cf9dfbb-gw4jk\" (UID: \"0f42bf51-8a6c-4390-83a6-dbae6d26126a\") " 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gw4jk" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.215194 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j"] Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.215367 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxbkb\" (UniqueName: \"kubernetes.io/projected/0f42bf51-8a6c-4390-83a6-dbae6d26126a-kube-api-access-jxbkb\") pod \"obo-prometheus-operator-668cf9dfbb-gw4jk\" (UID: \"0f42bf51-8a6c-4390-83a6-dbae6d26126a\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gw4jk" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.216013 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.222758 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-9pld8" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.223104 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.235958 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j"] Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.243605 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8"] Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.244608 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.261250 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxbkb\" (UniqueName: \"kubernetes.io/projected/0f42bf51-8a6c-4390-83a6-dbae6d26126a-kube-api-access-jxbkb\") pod \"obo-prometheus-operator-668cf9dfbb-gw4jk\" (UID: \"0f42bf51-8a6c-4390-83a6-dbae6d26126a\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gw4jk" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.284198 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8"] Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.317012 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1a8e934-b419-4e57-9311-7c8a34745da9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j\" (UID: \"d1a8e934-b419-4e57-9311-7c8a34745da9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.317298 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b44bcfa-0c82-4db2-b4e0-310a76be2b6f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8\" (UID: \"7b44bcfa-0c82-4db2-b4e0-310a76be2b6f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.317374 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d1a8e934-b419-4e57-9311-7c8a34745da9-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j\" (UID: \"d1a8e934-b419-4e57-9311-7c8a34745da9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.317472 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b44bcfa-0c82-4db2-b4e0-310a76be2b6f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8\" (UID: \"7b44bcfa-0c82-4db2-b4e0-310a76be2b6f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.395525 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-qmk8j"] Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.396512 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-qmk8j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.399137 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.399293 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-6kfvn" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.418607 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d1a8e934-b419-4e57-9311-7c8a34745da9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j\" (UID: \"d1a8e934-b419-4e57-9311-7c8a34745da9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.418675 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdbwf\" (UniqueName: \"kubernetes.io/projected/7ee15342-4efd-4e6b-8569-b54b26064eaf-kube-api-access-wdbwf\") pod \"observability-operator-d8bb48f5d-qmk8j\" (UID: \"7ee15342-4efd-4e6b-8569-b54b26064eaf\") " pod="openshift-operators/observability-operator-d8bb48f5d-qmk8j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.418707 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b44bcfa-0c82-4db2-b4e0-310a76be2b6f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8\" (UID: \"7b44bcfa-0c82-4db2-b4e0-310a76be2b6f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.418727 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1a8e934-b419-4e57-9311-7c8a34745da9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j\" (UID: \"d1a8e934-b419-4e57-9311-7c8a34745da9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.418775 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ee15342-4efd-4e6b-8569-b54b26064eaf-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-qmk8j\" (UID: \"7ee15342-4efd-4e6b-8569-b54b26064eaf\") " pod="openshift-operators/observability-operator-d8bb48f5d-qmk8j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.418801 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b44bcfa-0c82-4db2-b4e0-310a76be2b6f-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8\" (UID: \"7b44bcfa-0c82-4db2-b4e0-310a76be2b6f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.428047 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b44bcfa-0c82-4db2-b4e0-310a76be2b6f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8\" (UID: \"7b44bcfa-0c82-4db2-b4e0-310a76be2b6f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.428783 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b44bcfa-0c82-4db2-b4e0-310a76be2b6f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8\" (UID: \"7b44bcfa-0c82-4db2-b4e0-310a76be2b6f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.436506 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-qmk8j"] Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.441248 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d1a8e934-b419-4e57-9311-7c8a34745da9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j\" (UID: \"d1a8e934-b419-4e57-9311-7c8a34745da9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.443931 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1a8e934-b419-4e57-9311-7c8a34745da9-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j\" (UID: \"d1a8e934-b419-4e57-9311-7c8a34745da9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.500308 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-p8ln7"] Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.501453 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-p8ln7" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.509045 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-m2x6w" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.519825 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdbwf\" (UniqueName: \"kubernetes.io/projected/7ee15342-4efd-4e6b-8569-b54b26064eaf-kube-api-access-wdbwf\") pod \"observability-operator-d8bb48f5d-qmk8j\" (UID: \"7ee15342-4efd-4e6b-8569-b54b26064eaf\") " pod="openshift-operators/observability-operator-d8bb48f5d-qmk8j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.519945 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ee15342-4efd-4e6b-8569-b54b26064eaf-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-qmk8j\" (UID: \"7ee15342-4efd-4e6b-8569-b54b26064eaf\") " pod="openshift-operators/observability-operator-d8bb48f5d-qmk8j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.526848 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ee15342-4efd-4e6b-8569-b54b26064eaf-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-qmk8j\" (UID: \"7ee15342-4efd-4e6b-8569-b54b26064eaf\") " 
pod="openshift-operators/observability-operator-d8bb48f5d-qmk8j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.531452 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-p8ln7"] Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.540653 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.543210 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdbwf\" (UniqueName: \"kubernetes.io/projected/7ee15342-4efd-4e6b-8569-b54b26064eaf-kube-api-access-wdbwf\") pod \"observability-operator-d8bb48f5d-qmk8j\" (UID: \"7ee15342-4efd-4e6b-8569-b54b26064eaf\") " pod="openshift-operators/observability-operator-d8bb48f5d-qmk8j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.595436 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.621275 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9g6r\" (UniqueName: \"kubernetes.io/projected/0055f07d-1546-45ad-b576-87d016490055-kube-api-access-m9g6r\") pod \"perses-operator-5446b9c989-p8ln7\" (UID: \"0055f07d-1546-45ad-b576-87d016490055\") " pod="openshift-operators/perses-operator-5446b9c989-p8ln7" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.621328 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0055f07d-1546-45ad-b576-87d016490055-openshift-service-ca\") pod \"perses-operator-5446b9c989-p8ln7\" (UID: \"0055f07d-1546-45ad-b576-87d016490055\") " pod="openshift-operators/perses-operator-5446b9c989-p8ln7" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.713535 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-qmk8j" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.724943 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9g6r\" (UniqueName: \"kubernetes.io/projected/0055f07d-1546-45ad-b576-87d016490055-kube-api-access-m9g6r\") pod \"perses-operator-5446b9c989-p8ln7\" (UID: \"0055f07d-1546-45ad-b576-87d016490055\") " pod="openshift-operators/perses-operator-5446b9c989-p8ln7" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.725005 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0055f07d-1546-45ad-b576-87d016490055-openshift-service-ca\") pod \"perses-operator-5446b9c989-p8ln7\" (UID: \"0055f07d-1546-45ad-b576-87d016490055\") " pod="openshift-operators/perses-operator-5446b9c989-p8ln7" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.726291 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0055f07d-1546-45ad-b576-87d016490055-openshift-service-ca\") pod \"perses-operator-5446b9c989-p8ln7\" (UID: \"0055f07d-1546-45ad-b576-87d016490055\") " pod="openshift-operators/perses-operator-5446b9c989-p8ln7" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.753355 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9g6r\" (UniqueName: \"kubernetes.io/projected/0055f07d-1546-45ad-b576-87d016490055-kube-api-access-m9g6r\") pod \"perses-operator-5446b9c989-p8ln7\" (UID: \"0055f07d-1546-45ad-b576-87d016490055\") " pod="openshift-operators/perses-operator-5446b9c989-p8ln7" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.822712 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-p8ln7" Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.846482 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j"] Nov 24 12:37:27 crc kubenswrapper[4756]: I1124 12:37:27.907081 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8"] Nov 24 12:37:27 crc kubenswrapper[4756]: W1124 12:37:27.926054 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b44bcfa_0c82_4db2_b4e0_310a76be2b6f.slice/crio-b69c4c5f814b124e8060b7af9c9e8ce38641408648eea670aef518f5bb703472 WatchSource:0}: Error finding container b69c4c5f814b124e8060b7af9c9e8ce38641408648eea670aef518f5bb703472: Status 404 returned error can't find the container with id b69c4c5f814b124e8060b7af9c9e8ce38641408648eea670aef518f5bb703472 Nov 24 12:37:28 crc kubenswrapper[4756]: I1124 12:37:28.024066 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-qmk8j"] Nov 24 12:37:28 crc kubenswrapper[4756]: I1124 12:37:28.076252 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-p8ln7"] Nov 24 12:37:28 crc kubenswrapper[4756]: W1124 12:37:28.086905 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0055f07d_1546_45ad_b576_87d016490055.slice/crio-32398ff8a72db48f046fee7ab425e0c70a32749438be20f71f6e650a20c0b151 WatchSource:0}: Error finding container 32398ff8a72db48f046fee7ab425e0c70a32749438be20f71f6e650a20c0b151: Status 404 returned error can't find the container with id 32398ff8a72db48f046fee7ab425e0c70a32749438be20f71f6e650a20c0b151 Nov 24 12:37:28 crc kubenswrapper[4756]: I1124 12:37:28.308514 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j" event={"ID":"d1a8e934-b419-4e57-9311-7c8a34745da9","Type":"ContainerStarted","Data":"143f10a32f689e6ddaf4e78d1ead3ee03d755c75c5aafee9d14c624ed48ea285"} Nov 24 12:37:28 crc kubenswrapper[4756]: I1124 12:37:28.310281 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-p8ln7" event={"ID":"0055f07d-1546-45ad-b576-87d016490055","Type":"ContainerStarted","Data":"32398ff8a72db48f046fee7ab425e0c70a32749438be20f71f6e650a20c0b151"} Nov 24 12:37:28 crc kubenswrapper[4756]: I1124 12:37:28.312315 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8" event={"ID":"7b44bcfa-0c82-4db2-b4e0-310a76be2b6f","Type":"ContainerStarted","Data":"b69c4c5f814b124e8060b7af9c9e8ce38641408648eea670aef518f5bb703472"} Nov 24 12:37:28 crc kubenswrapper[4756]: I1124 12:37:28.314404 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-qmk8j" event={"ID":"7ee15342-4efd-4e6b-8569-b54b26064eaf","Type":"ContainerStarted","Data":"6915c9960cb7dbb58a0a69d33aa1cf04249e8847beeb2e77cb243f99714398ed"} Nov 24 12:37:28 crc kubenswrapper[4756]: I1124 12:37:28.341234 4756 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gw4jk" secret="" err="failed to sync secret cache: timed out waiting for the condition" Nov 24 12:37:28 crc kubenswrapper[4756]: I1124 12:37:28.341333 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gw4jk" Nov 24 12:37:28 crc kubenswrapper[4756]: I1124 12:37:28.387775 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-6nkbz" Nov 24 12:37:28 crc kubenswrapper[4756]: I1124 12:37:28.715919 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-gw4jk"] Nov 24 12:37:29 crc kubenswrapper[4756]: I1124 12:37:29.338296 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gw4jk" event={"ID":"0f42bf51-8a6c-4390-83a6-dbae6d26126a","Type":"ContainerStarted","Data":"2cdae2362e43265e5f6ab1092f3117a5041c271a0b5f878f82ae4510b4be19b3"} Nov 24 12:37:33 crc kubenswrapper[4756]: I1124 12:37:33.479605 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:37:33 crc kubenswrapper[4756]: I1124 12:37:33.480271 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:37:33 crc kubenswrapper[4756]: I1124 12:37:33.480322 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:37:33 crc kubenswrapper[4756]: I1124 12:37:33.481095 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"08cbdaf4c5a00dfec5d8d1553322ef80891c64bf60f6dc1ea376e947fc205e7b"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:37:33 crc kubenswrapper[4756]: I1124 12:37:33.481177 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://08cbdaf4c5a00dfec5d8d1553322ef80891c64bf60f6dc1ea376e947fc205e7b" gracePeriod=600 Nov 24 12:37:34 crc kubenswrapper[4756]: I1124 12:37:34.396939 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="08cbdaf4c5a00dfec5d8d1553322ef80891c64bf60f6dc1ea376e947fc205e7b" exitCode=0 Nov 24 12:37:34 crc kubenswrapper[4756]: I1124 12:37:34.397006 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"08cbdaf4c5a00dfec5d8d1553322ef80891c64bf60f6dc1ea376e947fc205e7b"} Nov 24 12:37:34 crc kubenswrapper[4756]: I1124 12:37:34.397054 4756 scope.go:117] "RemoveContainer" containerID="d419abe4df6f670f855d594a57ab33aaab6cd64ce42c054a467f81e1256746e2" Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.451496 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gw4jk" event={"ID":"0f42bf51-8a6c-4390-83a6-dbae6d26126a","Type":"ContainerStarted","Data":"8f00c463fdff5d23143b20278b55c9666b91f281d0686ffa86241e0bc429f53b"} Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.455058 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" 
event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"0e7ca259f45f1dc780d3934be64aebd04b7e861b4656bbd4bf229c3c5aaf5bbb"} Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.456935 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-qmk8j" event={"ID":"7ee15342-4efd-4e6b-8569-b54b26064eaf","Type":"ContainerStarted","Data":"64c76ba2631cd09b2907fe85d64eeddc13bc799dfa6b47b401908b2f7ede12fe"} Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.457134 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-qmk8j" Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.458702 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j" event={"ID":"d1a8e934-b419-4e57-9311-7c8a34745da9","Type":"ContainerStarted","Data":"6011cd68f4856c6bd1eca6aa163f842e764b93a3ab152571925723f6ef9fafbf"} Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.460369 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-p8ln7" event={"ID":"0055f07d-1546-45ad-b576-87d016490055","Type":"ContainerStarted","Data":"c8a2f63b28a2fe53829a4eb6166882774bbf592e2b4c0e6a19e77f1febfc9172"} Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.460699 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-p8ln7" Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.462243 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8" event={"ID":"7b44bcfa-0c82-4db2-b4e0-310a76be2b6f","Type":"ContainerStarted","Data":"fa42a1cb0c8822a8825aa120abb68d6c6e48e367b812bf65d7c95cf5413be906"} Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.471695 4756 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gw4jk" podStartSLOduration=2.418502458 podStartE2EDuration="15.471676264s" podCreationTimestamp="2025-11-24 12:37:27 +0000 UTC" firstStartedPulling="2025-11-24 12:37:28.731775462 +0000 UTC m=+581.089289604" lastFinishedPulling="2025-11-24 12:37:41.784949278 +0000 UTC m=+594.142463410" observedRunningTime="2025-11-24 12:37:42.470055219 +0000 UTC m=+594.827569381" watchObservedRunningTime="2025-11-24 12:37:42.471676264 +0000 UTC m=+594.829190406" Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.493438 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8" podStartSLOduration=1.636013505 podStartE2EDuration="15.493419238s" podCreationTimestamp="2025-11-24 12:37:27 +0000 UTC" firstStartedPulling="2025-11-24 12:37:27.934289533 +0000 UTC m=+580.291803675" lastFinishedPulling="2025-11-24 12:37:41.791695266 +0000 UTC m=+594.149209408" observedRunningTime="2025-11-24 12:37:42.492789941 +0000 UTC m=+594.850304083" watchObservedRunningTime="2025-11-24 12:37:42.493419238 +0000 UTC m=+594.850933380" Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.503250 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-qmk8j" Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.535457 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-p8ln7" podStartSLOduration=1.806914573 podStartE2EDuration="15.535437915s" podCreationTimestamp="2025-11-24 12:37:27 +0000 UTC" firstStartedPulling="2025-11-24 12:37:28.090889963 +0000 UTC m=+580.448404105" lastFinishedPulling="2025-11-24 12:37:41.819413305 +0000 UTC m=+594.176927447" observedRunningTime="2025-11-24 12:37:42.534666654 +0000 UTC m=+594.892180796" 
watchObservedRunningTime="2025-11-24 12:37:42.535437915 +0000 UTC m=+594.892952047" Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.567009 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-qmk8j" podStartSLOduration=1.784052284 podStartE2EDuration="15.566988082s" podCreationTimestamp="2025-11-24 12:37:27 +0000 UTC" firstStartedPulling="2025-11-24 12:37:28.037778414 +0000 UTC m=+580.395292556" lastFinishedPulling="2025-11-24 12:37:41.820714222 +0000 UTC m=+594.178228354" observedRunningTime="2025-11-24 12:37:42.565248563 +0000 UTC m=+594.922762715" watchObservedRunningTime="2025-11-24 12:37:42.566988082 +0000 UTC m=+594.924502224" Nov 24 12:37:42 crc kubenswrapper[4756]: I1124 12:37:42.626022 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j" podStartSLOduration=1.6755525279999999 podStartE2EDuration="15.626001421s" podCreationTimestamp="2025-11-24 12:37:27 +0000 UTC" firstStartedPulling="2025-11-24 12:37:27.878831327 +0000 UTC m=+580.236345459" lastFinishedPulling="2025-11-24 12:37:41.82928021 +0000 UTC m=+594.186794352" observedRunningTime="2025-11-24 12:37:42.590274269 +0000 UTC m=+594.947788441" watchObservedRunningTime="2025-11-24 12:37:42.626001421 +0000 UTC m=+594.983515563" Nov 24 12:37:47 crc kubenswrapper[4756]: I1124 12:37:47.825198 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-p8ln7" Nov 24 12:38:07 crc kubenswrapper[4756]: I1124 12:38:07.805947 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb"] Nov 24 12:38:07 crc kubenswrapper[4756]: I1124 12:38:07.808700 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" Nov 24 12:38:07 crc kubenswrapper[4756]: I1124 12:38:07.811348 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 12:38:07 crc kubenswrapper[4756]: I1124 12:38:07.821668 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb"] Nov 24 12:38:07 crc kubenswrapper[4756]: I1124 12:38:07.882107 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb\" (UID: \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" Nov 24 12:38:07 crc kubenswrapper[4756]: I1124 12:38:07.882258 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb\" (UID: \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" Nov 24 12:38:07 crc kubenswrapper[4756]: I1124 12:38:07.882285 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvh4m\" (UniqueName: \"kubernetes.io/projected/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-kube-api-access-mvh4m\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb\" (UID: \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" Nov 24 12:38:07 crc kubenswrapper[4756]: 
I1124 12:38:07.983672 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb\" (UID: \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" Nov 24 12:38:07 crc kubenswrapper[4756]: I1124 12:38:07.983755 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb\" (UID: \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" Nov 24 12:38:07 crc kubenswrapper[4756]: I1124 12:38:07.983785 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvh4m\" (UniqueName: \"kubernetes.io/projected/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-kube-api-access-mvh4m\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb\" (UID: \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" Nov 24 12:38:07 crc kubenswrapper[4756]: I1124 12:38:07.984626 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb\" (UID: \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" Nov 24 12:38:07 crc kubenswrapper[4756]: I1124 12:38:07.984678 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb\" (UID: \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" Nov 24 12:38:08 crc kubenswrapper[4756]: I1124 12:38:08.006604 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvh4m\" (UniqueName: \"kubernetes.io/projected/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-kube-api-access-mvh4m\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb\" (UID: \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" Nov 24 12:38:08 crc kubenswrapper[4756]: I1124 12:38:08.127337 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" Nov 24 12:38:08 crc kubenswrapper[4756]: I1124 12:38:08.346739 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb"] Nov 24 12:38:08 crc kubenswrapper[4756]: I1124 12:38:08.739573 4756 generic.go:334] "Generic (PLEG): container finished" podID="6430b3f4-a359-4bb6-abd8-a0a4e39183b8" containerID="43bcac0d6f8b929f5c275e74db88471708cae94204927a58fc0eb6508ff796ec" exitCode=0 Nov 24 12:38:08 crc kubenswrapper[4756]: I1124 12:38:08.739697 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" event={"ID":"6430b3f4-a359-4bb6-abd8-a0a4e39183b8","Type":"ContainerDied","Data":"43bcac0d6f8b929f5c275e74db88471708cae94204927a58fc0eb6508ff796ec"} Nov 24 12:38:08 crc kubenswrapper[4756]: I1124 12:38:08.739973 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" event={"ID":"6430b3f4-a359-4bb6-abd8-a0a4e39183b8","Type":"ContainerStarted","Data":"7fa3e7230f9ae915b1e7e34a422245828e82b0570f2ca703e97811e782aaf5a6"} Nov 24 12:38:10 crc kubenswrapper[4756]: I1124 12:38:10.756450 4756 generic.go:334] "Generic (PLEG): container finished" podID="6430b3f4-a359-4bb6-abd8-a0a4e39183b8" containerID="ef57d1dc85123591cfced07e011cbb525a256e871b0f61848544b246d15db3c8" exitCode=0 Nov 24 12:38:10 crc kubenswrapper[4756]: I1124 12:38:10.756644 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" event={"ID":"6430b3f4-a359-4bb6-abd8-a0a4e39183b8","Type":"ContainerDied","Data":"ef57d1dc85123591cfced07e011cbb525a256e871b0f61848544b246d15db3c8"} Nov 24 12:38:11 crc kubenswrapper[4756]: I1124 12:38:11.763852 4756 generic.go:334] "Generic (PLEG): container finished" podID="6430b3f4-a359-4bb6-abd8-a0a4e39183b8" containerID="9811947080e3193bef2d67b7cee02c3c54c2a6021416da346ac9ec5d6b7df5a5" exitCode=0 Nov 24 12:38:11 crc kubenswrapper[4756]: I1124 12:38:11.763891 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" event={"ID":"6430b3f4-a359-4bb6-abd8-a0a4e39183b8","Type":"ContainerDied","Data":"9811947080e3193bef2d67b7cee02c3c54c2a6021416da346ac9ec5d6b7df5a5"} Nov 24 12:38:13 crc kubenswrapper[4756]: I1124 12:38:13.088357 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" Nov 24 12:38:13 crc kubenswrapper[4756]: I1124 12:38:13.160477 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-util\") pod \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\" (UID: \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\") " Nov 24 12:38:13 crc kubenswrapper[4756]: I1124 12:38:13.160582 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-bundle\") pod \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\" (UID: \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\") " Nov 24 12:38:13 crc kubenswrapper[4756]: I1124 12:38:13.160655 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvh4m\" (UniqueName: \"kubernetes.io/projected/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-kube-api-access-mvh4m\") pod \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\" (UID: \"6430b3f4-a359-4bb6-abd8-a0a4e39183b8\") " Nov 24 12:38:13 crc kubenswrapper[4756]: I1124 12:38:13.161997 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-bundle" (OuterVolumeSpecName: "bundle") pod "6430b3f4-a359-4bb6-abd8-a0a4e39183b8" (UID: "6430b3f4-a359-4bb6-abd8-a0a4e39183b8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:38:13 crc kubenswrapper[4756]: I1124 12:38:13.167815 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-kube-api-access-mvh4m" (OuterVolumeSpecName: "kube-api-access-mvh4m") pod "6430b3f4-a359-4bb6-abd8-a0a4e39183b8" (UID: "6430b3f4-a359-4bb6-abd8-a0a4e39183b8"). InnerVolumeSpecName "kube-api-access-mvh4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:38:13 crc kubenswrapper[4756]: I1124 12:38:13.175085 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-util" (OuterVolumeSpecName: "util") pod "6430b3f4-a359-4bb6-abd8-a0a4e39183b8" (UID: "6430b3f4-a359-4bb6-abd8-a0a4e39183b8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:38:13 crc kubenswrapper[4756]: I1124 12:38:13.262677 4756 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-util\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:13 crc kubenswrapper[4756]: I1124 12:38:13.262736 4756 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:13 crc kubenswrapper[4756]: I1124 12:38:13.262746 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvh4m\" (UniqueName: \"kubernetes.io/projected/6430b3f4-a359-4bb6-abd8-a0a4e39183b8-kube-api-access-mvh4m\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:13 crc kubenswrapper[4756]: I1124 12:38:13.781898 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" event={"ID":"6430b3f4-a359-4bb6-abd8-a0a4e39183b8","Type":"ContainerDied","Data":"7fa3e7230f9ae915b1e7e34a422245828e82b0570f2ca703e97811e782aaf5a6"} Nov 24 12:38:13 crc kubenswrapper[4756]: I1124 12:38:13.782197 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fa3e7230f9ae915b1e7e34a422245828e82b0570f2ca703e97811e782aaf5a6" Nov 24 12:38:13 crc kubenswrapper[4756]: I1124 12:38:13.782289 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.418815 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-9gdrp"] Nov 24 12:38:15 crc kubenswrapper[4756]: E1124 12:38:15.419590 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6430b3f4-a359-4bb6-abd8-a0a4e39183b8" containerName="extract" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.419612 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6430b3f4-a359-4bb6-abd8-a0a4e39183b8" containerName="extract" Nov 24 12:38:15 crc kubenswrapper[4756]: E1124 12:38:15.419635 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6430b3f4-a359-4bb6-abd8-a0a4e39183b8" containerName="util" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.419645 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6430b3f4-a359-4bb6-abd8-a0a4e39183b8" containerName="util" Nov 24 12:38:15 crc kubenswrapper[4756]: E1124 12:38:15.419655 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6430b3f4-a359-4bb6-abd8-a0a4e39183b8" containerName="pull" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.419665 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6430b3f4-a359-4bb6-abd8-a0a4e39183b8" containerName="pull" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.419812 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="6430b3f4-a359-4bb6-abd8-a0a4e39183b8" containerName="extract" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.420566 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-9gdrp" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.424105 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ddw7t" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.424178 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.424471 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.436958 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-9gdrp"] Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.496023 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vjhk\" (UniqueName: \"kubernetes.io/projected/7d382e59-3e6d-496e-b637-3ef4848ddc24-kube-api-access-9vjhk\") pod \"nmstate-operator-557fdffb88-9gdrp\" (UID: \"7d382e59-3e6d-496e-b637-3ef4848ddc24\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-9gdrp" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.597819 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vjhk\" (UniqueName: \"kubernetes.io/projected/7d382e59-3e6d-496e-b637-3ef4848ddc24-kube-api-access-9vjhk\") pod \"nmstate-operator-557fdffb88-9gdrp\" (UID: \"7d382e59-3e6d-496e-b637-3ef4848ddc24\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-9gdrp" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.617944 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vjhk\" (UniqueName: \"kubernetes.io/projected/7d382e59-3e6d-496e-b637-3ef4848ddc24-kube-api-access-9vjhk\") pod \"nmstate-operator-557fdffb88-9gdrp\" (UID: 
\"7d382e59-3e6d-496e-b637-3ef4848ddc24\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-9gdrp" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.744982 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-9gdrp" Nov 24 12:38:15 crc kubenswrapper[4756]: I1124 12:38:15.989520 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-9gdrp"] Nov 24 12:38:16 crc kubenswrapper[4756]: I1124 12:38:16.800330 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-9gdrp" event={"ID":"7d382e59-3e6d-496e-b637-3ef4848ddc24","Type":"ContainerStarted","Data":"2a6bf78b88acb7913e31f10567451e124c9da58e2987be7c0921fda988408aa6"} Nov 24 12:38:18 crc kubenswrapper[4756]: I1124 12:38:18.817639 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-9gdrp" event={"ID":"7d382e59-3e6d-496e-b637-3ef4848ddc24","Type":"ContainerStarted","Data":"7005094845e0efcb40f680dcf1794aae87c21a587add42d56c1af624309e7325"} Nov 24 12:38:18 crc kubenswrapper[4756]: I1124 12:38:18.846739 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-9gdrp" podStartSLOduration=1.768680987 podStartE2EDuration="3.84672229s" podCreationTimestamp="2025-11-24 12:38:15 +0000 UTC" firstStartedPulling="2025-11-24 12:38:16.002957276 +0000 UTC m=+628.360471418" lastFinishedPulling="2025-11-24 12:38:18.080998579 +0000 UTC m=+630.438512721" observedRunningTime="2025-11-24 12:38:18.835134268 +0000 UTC m=+631.192648410" watchObservedRunningTime="2025-11-24 12:38:18.84672229 +0000 UTC m=+631.204236432" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.851466 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-4c8wf"] Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 
12:38:19.852386 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4c8wf" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.856661 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-74xq7" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.857352 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k"] Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.858106 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.859312 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.861320 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdrnw\" (UniqueName: \"kubernetes.io/projected/6e7aa8ba-7a40-4d85-8b19-86dfe7e87eb1-kube-api-access-pdrnw\") pod \"nmstate-metrics-5dcf9c57c5-4c8wf\" (UID: \"6e7aa8ba-7a40-4d85-8b19-86dfe7e87eb1\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4c8wf" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.873482 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-4c8wf"] Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.878834 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k"] Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.907457 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-hg5kh"] Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.908392 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.962274 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdrnw\" (UniqueName: \"kubernetes.io/projected/6e7aa8ba-7a40-4d85-8b19-86dfe7e87eb1-kube-api-access-pdrnw\") pod \"nmstate-metrics-5dcf9c57c5-4c8wf\" (UID: \"6e7aa8ba-7a40-4d85-8b19-86dfe7e87eb1\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4c8wf" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.962340 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bb1daace-bab0-41df-a60c-cc01cd7013ea-nmstate-lock\") pod \"nmstate-handler-hg5kh\" (UID: \"bb1daace-bab0-41df-a60c-cc01cd7013ea\") " pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.962386 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2pcz\" (UniqueName: \"kubernetes.io/projected/28634c25-efc4-43b6-92c5-0bc6b20aa941-kube-api-access-n2pcz\") pod \"nmstate-webhook-6b89b748d8-p6r5k\" (UID: \"28634c25-efc4-43b6-92c5-0bc6b20aa941\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.962414 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bb1daace-bab0-41df-a60c-cc01cd7013ea-dbus-socket\") pod \"nmstate-handler-hg5kh\" (UID: \"bb1daace-bab0-41df-a60c-cc01cd7013ea\") " pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.962448 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/28634c25-efc4-43b6-92c5-0bc6b20aa941-tls-key-pair\") pod 
\"nmstate-webhook-6b89b748d8-p6r5k\" (UID: \"28634c25-efc4-43b6-92c5-0bc6b20aa941\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.962466 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bb1daace-bab0-41df-a60c-cc01cd7013ea-ovs-socket\") pod \"nmstate-handler-hg5kh\" (UID: \"bb1daace-bab0-41df-a60c-cc01cd7013ea\") " pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.962488 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch4x5\" (UniqueName: \"kubernetes.io/projected/bb1daace-bab0-41df-a60c-cc01cd7013ea-kube-api-access-ch4x5\") pod \"nmstate-handler-hg5kh\" (UID: \"bb1daace-bab0-41df-a60c-cc01cd7013ea\") " pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:19 crc kubenswrapper[4756]: I1124 12:38:19.989609 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdrnw\" (UniqueName: \"kubernetes.io/projected/6e7aa8ba-7a40-4d85-8b19-86dfe7e87eb1-kube-api-access-pdrnw\") pod \"nmstate-metrics-5dcf9c57c5-4c8wf\" (UID: \"6e7aa8ba-7a40-4d85-8b19-86dfe7e87eb1\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4c8wf" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.009445 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6"] Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.010496 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.013364 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.014932 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.015380 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-bd85b" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.054619 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6"] Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.063617 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmfnk\" (UniqueName: \"kubernetes.io/projected/c0c918b6-55ce-4aa8-b777-1b442a5c0ea9-kube-api-access-zmfnk\") pod \"nmstate-console-plugin-5874bd7bc5-xbnb6\" (UID: \"c0c918b6-55ce-4aa8-b777-1b442a5c0ea9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.063789 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2pcz\" (UniqueName: \"kubernetes.io/projected/28634c25-efc4-43b6-92c5-0bc6b20aa941-kube-api-access-n2pcz\") pod \"nmstate-webhook-6b89b748d8-p6r5k\" (UID: \"28634c25-efc4-43b6-92c5-0bc6b20aa941\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.063863 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bb1daace-bab0-41df-a60c-cc01cd7013ea-dbus-socket\") pod \"nmstate-handler-hg5kh\" (UID: \"bb1daace-bab0-41df-a60c-cc01cd7013ea\") " 
pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.063927 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c918b6-55ce-4aa8-b777-1b442a5c0ea9-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-xbnb6\" (UID: \"c0c918b6-55ce-4aa8-b777-1b442a5c0ea9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.064033 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/28634c25-efc4-43b6-92c5-0bc6b20aa941-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-p6r5k\" (UID: \"28634c25-efc4-43b6-92c5-0bc6b20aa941\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.064113 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bb1daace-bab0-41df-a60c-cc01cd7013ea-ovs-socket\") pod \"nmstate-handler-hg5kh\" (UID: \"bb1daace-bab0-41df-a60c-cc01cd7013ea\") " pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.064212 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch4x5\" (UniqueName: \"kubernetes.io/projected/bb1daace-bab0-41df-a60c-cc01cd7013ea-kube-api-access-ch4x5\") pod \"nmstate-handler-hg5kh\" (UID: \"bb1daace-bab0-41df-a60c-cc01cd7013ea\") " pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.064293 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c0c918b6-55ce-4aa8-b777-1b442a5c0ea9-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-xbnb6\" (UID: 
\"c0c918b6-55ce-4aa8-b777-1b442a5c0ea9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.064372 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bb1daace-bab0-41df-a60c-cc01cd7013ea-nmstate-lock\") pod \"nmstate-handler-hg5kh\" (UID: \"bb1daace-bab0-41df-a60c-cc01cd7013ea\") " pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.064524 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bb1daace-bab0-41df-a60c-cc01cd7013ea-nmstate-lock\") pod \"nmstate-handler-hg5kh\" (UID: \"bb1daace-bab0-41df-a60c-cc01cd7013ea\") " pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.065048 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bb1daace-bab0-41df-a60c-cc01cd7013ea-dbus-socket\") pod \"nmstate-handler-hg5kh\" (UID: \"bb1daace-bab0-41df-a60c-cc01cd7013ea\") " pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.065405 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bb1daace-bab0-41df-a60c-cc01cd7013ea-ovs-socket\") pod \"nmstate-handler-hg5kh\" (UID: \"bb1daace-bab0-41df-a60c-cc01cd7013ea\") " pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.068594 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/28634c25-efc4-43b6-92c5-0bc6b20aa941-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-p6r5k\" (UID: \"28634c25-efc4-43b6-92c5-0bc6b20aa941\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k" Nov 24 12:38:20 
crc kubenswrapper[4756]: I1124 12:38:20.086974 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch4x5\" (UniqueName: \"kubernetes.io/projected/bb1daace-bab0-41df-a60c-cc01cd7013ea-kube-api-access-ch4x5\") pod \"nmstate-handler-hg5kh\" (UID: \"bb1daace-bab0-41df-a60c-cc01cd7013ea\") " pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.088847 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2pcz\" (UniqueName: \"kubernetes.io/projected/28634c25-efc4-43b6-92c5-0bc6b20aa941-kube-api-access-n2pcz\") pod \"nmstate-webhook-6b89b748d8-p6r5k\" (UID: \"28634c25-efc4-43b6-92c5-0bc6b20aa941\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.166072 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmfnk\" (UniqueName: \"kubernetes.io/projected/c0c918b6-55ce-4aa8-b777-1b442a5c0ea9-kube-api-access-zmfnk\") pod \"nmstate-console-plugin-5874bd7bc5-xbnb6\" (UID: \"c0c918b6-55ce-4aa8-b777-1b442a5c0ea9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.166396 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c918b6-55ce-4aa8-b777-1b442a5c0ea9-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-xbnb6\" (UID: \"c0c918b6-55ce-4aa8-b777-1b442a5c0ea9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.166534 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c0c918b6-55ce-4aa8-b777-1b442a5c0ea9-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-xbnb6\" (UID: \"c0c918b6-55ce-4aa8-b777-1b442a5c0ea9\") " 
pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.168156 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c0c918b6-55ce-4aa8-b777-1b442a5c0ea9-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-xbnb6\" (UID: \"c0c918b6-55ce-4aa8-b777-1b442a5c0ea9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.172043 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c918b6-55ce-4aa8-b777-1b442a5c0ea9-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-xbnb6\" (UID: \"c0c918b6-55ce-4aa8-b777-1b442a5c0ea9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.177633 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4c8wf" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.187344 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmfnk\" (UniqueName: \"kubernetes.io/projected/c0c918b6-55ce-4aa8-b777-1b442a5c0ea9-kube-api-access-zmfnk\") pod \"nmstate-console-plugin-5874bd7bc5-xbnb6\" (UID: \"c0c918b6-55ce-4aa8-b777-1b442a5c0ea9\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.189988 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.207381 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5bd46cddf-6hv79"] Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.208589 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.218282 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5bd46cddf-6hv79"] Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.225786 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.267948 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m77jp\" (UniqueName: \"kubernetes.io/projected/a0e99d9f-15a2-4dab-be99-3409559e9b55-kube-api-access-m77jp\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.268020 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0e99d9f-15a2-4dab-be99-3409559e9b55-service-ca\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.268058 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0e99d9f-15a2-4dab-be99-3409559e9b55-console-oauth-config\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.268298 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0e99d9f-15a2-4dab-be99-3409559e9b55-console-config\") pod \"console-5bd46cddf-6hv79\" (UID: 
\"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.268393 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e99d9f-15a2-4dab-be99-3409559e9b55-console-serving-cert\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.268439 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0e99d9f-15a2-4dab-be99-3409559e9b55-oauth-serving-cert\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.268474 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0e99d9f-15a2-4dab-be99-3409559e9b55-trusted-ca-bundle\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.327832 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.370293 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0e99d9f-15a2-4dab-be99-3409559e9b55-console-config\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.370400 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e99d9f-15a2-4dab-be99-3409559e9b55-console-serving-cert\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.371320 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0e99d9f-15a2-4dab-be99-3409559e9b55-console-config\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.370432 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0e99d9f-15a2-4dab-be99-3409559e9b55-oauth-serving-cert\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.371408 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0e99d9f-15a2-4dab-be99-3409559e9b55-trusted-ca-bundle\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " 
pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.371426 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0e99d9f-15a2-4dab-be99-3409559e9b55-oauth-serving-cert\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.371439 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m77jp\" (UniqueName: \"kubernetes.io/projected/a0e99d9f-15a2-4dab-be99-3409559e9b55-kube-api-access-m77jp\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.371577 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0e99d9f-15a2-4dab-be99-3409559e9b55-service-ca\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.371620 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0e99d9f-15a2-4dab-be99-3409559e9b55-console-oauth-config\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.372410 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0e99d9f-15a2-4dab-be99-3409559e9b55-trusted-ca-bundle\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 
24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.372454 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0e99d9f-15a2-4dab-be99-3409559e9b55-service-ca\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.375386 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0e99d9f-15a2-4dab-be99-3409559e9b55-console-oauth-config\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.383757 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e99d9f-15a2-4dab-be99-3409559e9b55-console-serving-cert\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.389199 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m77jp\" (UniqueName: \"kubernetes.io/projected/a0e99d9f-15a2-4dab-be99-3409559e9b55-kube-api-access-m77jp\") pod \"console-5bd46cddf-6hv79\" (UID: \"a0e99d9f-15a2-4dab-be99-3409559e9b55\") " pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.466441 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-4c8wf"] Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.538281 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.553462 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6"] Nov 24 12:38:20 crc kubenswrapper[4756]: W1124 12:38:20.560600 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0c918b6_55ce_4aa8_b777_1b442a5c0ea9.slice/crio-d794b9964a6877ba0449e057a485d7db667d1a2913223ffd4a1dcb08106a4fd3 WatchSource:0}: Error finding container d794b9964a6877ba0449e057a485d7db667d1a2913223ffd4a1dcb08106a4fd3: Status 404 returned error can't find the container with id d794b9964a6877ba0449e057a485d7db667d1a2913223ffd4a1dcb08106a4fd3 Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.615952 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k"] Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.715105 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5bd46cddf-6hv79"] Nov 24 12:38:20 crc kubenswrapper[4756]: W1124 12:38:20.723210 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0e99d9f_15a2_4dab_be99_3409559e9b55.slice/crio-38e5012eae5078dd8513ccc981b251c483ad03f506876ccc648d60f2c4692f41 WatchSource:0}: Error finding container 38e5012eae5078dd8513ccc981b251c483ad03f506876ccc648d60f2c4692f41: Status 404 returned error can't find the container with id 38e5012eae5078dd8513ccc981b251c483ad03f506876ccc648d60f2c4692f41 Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.832119 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k" 
event={"ID":"28634c25-efc4-43b6-92c5-0bc6b20aa941","Type":"ContainerStarted","Data":"4dd00d9d075b9dd1aabf2fa2cb04f898944b41f6b7cb963799ceb888b2e1f779"} Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.833133 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" event={"ID":"c0c918b6-55ce-4aa8-b777-1b442a5c0ea9","Type":"ContainerStarted","Data":"d794b9964a6877ba0449e057a485d7db667d1a2913223ffd4a1dcb08106a4fd3"} Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.834111 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5bd46cddf-6hv79" event={"ID":"a0e99d9f-15a2-4dab-be99-3409559e9b55","Type":"ContainerStarted","Data":"38e5012eae5078dd8513ccc981b251c483ad03f506876ccc648d60f2c4692f41"} Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.835213 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hg5kh" event={"ID":"bb1daace-bab0-41df-a60c-cc01cd7013ea","Type":"ContainerStarted","Data":"9922ff172784dbaa9ed2e8d961b462339d1e066ef8ce6afa4aa1a326a56e2ba3"} Nov 24 12:38:20 crc kubenswrapper[4756]: I1124 12:38:20.837113 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4c8wf" event={"ID":"6e7aa8ba-7a40-4d85-8b19-86dfe7e87eb1","Type":"ContainerStarted","Data":"1addb6604b95e6fdbe4a2b83671da8d73188417faab252001b275d3139b629fb"} Nov 24 12:38:21 crc kubenswrapper[4756]: I1124 12:38:21.845631 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5bd46cddf-6hv79" event={"ID":"a0e99d9f-15a2-4dab-be99-3409559e9b55","Type":"ContainerStarted","Data":"63fa68efc03807902e5aa780daae3418c91d8fa74dc703ffedbf932bacb17a51"} Nov 24 12:38:21 crc kubenswrapper[4756]: I1124 12:38:21.869601 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5bd46cddf-6hv79" podStartSLOduration=1.869564528 
podStartE2EDuration="1.869564528s" podCreationTimestamp="2025-11-24 12:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:38:21.865521916 +0000 UTC m=+634.223036078" watchObservedRunningTime="2025-11-24 12:38:21.869564528 +0000 UTC m=+634.227078670" Nov 24 12:38:23 crc kubenswrapper[4756]: I1124 12:38:23.862633 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k" event={"ID":"28634c25-efc4-43b6-92c5-0bc6b20aa941","Type":"ContainerStarted","Data":"5dea2f3847934d35412f94d7550ff1c83d10942d4a2d8df0ce109bddbb8bf240"} Nov 24 12:38:23 crc kubenswrapper[4756]: I1124 12:38:23.863597 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k" Nov 24 12:38:23 crc kubenswrapper[4756]: I1124 12:38:23.864898 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" event={"ID":"c0c918b6-55ce-4aa8-b777-1b442a5c0ea9","Type":"ContainerStarted","Data":"51d2bb2dc0b239ee6cec318b78b8c6e6ae5b0a52a4e43c0f2d636a0c0bf105cf"} Nov 24 12:38:23 crc kubenswrapper[4756]: I1124 12:38:23.867613 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hg5kh" event={"ID":"bb1daace-bab0-41df-a60c-cc01cd7013ea","Type":"ContainerStarted","Data":"9f00eb8ab7b6c3cf72e0c7c6eae554a9beaa616b8ac9262b8ba040ea547713c5"} Nov 24 12:38:23 crc kubenswrapper[4756]: I1124 12:38:23.868023 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:23 crc kubenswrapper[4756]: I1124 12:38:23.869525 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4c8wf" 
event={"ID":"6e7aa8ba-7a40-4d85-8b19-86dfe7e87eb1","Type":"ContainerStarted","Data":"9e7eb2833155b9adda6d5f5f888fca4765ba3f9bc819c486fb1b76374db3cc68"} Nov 24 12:38:23 crc kubenswrapper[4756]: I1124 12:38:23.910267 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k" podStartSLOduration=2.046843414 podStartE2EDuration="4.910250774s" podCreationTimestamp="2025-11-24 12:38:19 +0000 UTC" firstStartedPulling="2025-11-24 12:38:20.626744155 +0000 UTC m=+632.984258297" lastFinishedPulling="2025-11-24 12:38:23.490151515 +0000 UTC m=+635.847665657" observedRunningTime="2025-11-24 12:38:23.888143639 +0000 UTC m=+636.245657781" watchObservedRunningTime="2025-11-24 12:38:23.910250774 +0000 UTC m=+636.267764916" Nov 24 12:38:23 crc kubenswrapper[4756]: I1124 12:38:23.911518 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-hg5kh" podStartSLOduration=1.666635212 podStartE2EDuration="4.911511849s" podCreationTimestamp="2025-11-24 12:38:19 +0000 UTC" firstStartedPulling="2025-11-24 12:38:20.263405192 +0000 UTC m=+632.620919334" lastFinishedPulling="2025-11-24 12:38:23.508281829 +0000 UTC m=+635.865795971" observedRunningTime="2025-11-24 12:38:23.910617894 +0000 UTC m=+636.268132046" watchObservedRunningTime="2025-11-24 12:38:23.911511849 +0000 UTC m=+636.269025991" Nov 24 12:38:23 crc kubenswrapper[4756]: I1124 12:38:23.928955 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-xbnb6" podStartSLOduration=2.001260557 podStartE2EDuration="4.928933813s" podCreationTimestamp="2025-11-24 12:38:19 +0000 UTC" firstStartedPulling="2025-11-24 12:38:20.562649554 +0000 UTC m=+632.920163696" lastFinishedPulling="2025-11-24 12:38:23.49032278 +0000 UTC m=+635.847836952" observedRunningTime="2025-11-24 12:38:23.924494149 +0000 UTC m=+636.282008311" 
watchObservedRunningTime="2025-11-24 12:38:23.928933813 +0000 UTC m=+636.286447955" Nov 24 12:38:25 crc kubenswrapper[4756]: I1124 12:38:25.890429 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4c8wf" event={"ID":"6e7aa8ba-7a40-4d85-8b19-86dfe7e87eb1","Type":"ContainerStarted","Data":"d832bfa42ef48104f26cbfdd8f398b2c4c9165721bd7d4f3d70de1c9d7d84328"} Nov 24 12:38:25 crc kubenswrapper[4756]: I1124 12:38:25.915081 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-4c8wf" podStartSLOduration=1.819830478 podStartE2EDuration="6.915048643s" podCreationTimestamp="2025-11-24 12:38:19 +0000 UTC" firstStartedPulling="2025-11-24 12:38:20.474486115 +0000 UTC m=+632.832000257" lastFinishedPulling="2025-11-24 12:38:25.56970428 +0000 UTC m=+637.927218422" observedRunningTime="2025-11-24 12:38:25.910901538 +0000 UTC m=+638.268415690" watchObservedRunningTime="2025-11-24 12:38:25.915048643 +0000 UTC m=+638.272562785" Nov 24 12:38:30 crc kubenswrapper[4756]: I1124 12:38:30.251185 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-hg5kh" Nov 24 12:38:30 crc kubenswrapper[4756]: I1124 12:38:30.539205 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:30 crc kubenswrapper[4756]: I1124 12:38:30.539457 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:30 crc kubenswrapper[4756]: I1124 12:38:30.544906 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:30 crc kubenswrapper[4756]: I1124 12:38:30.928693 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5bd46cddf-6hv79" Nov 24 12:38:30 crc 
kubenswrapper[4756]: I1124 12:38:30.979730 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-srchr"] Nov 24 12:38:40 crc kubenswrapper[4756]: I1124 12:38:40.197089 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-p6r5k" Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.234685 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75"] Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.237688 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.243105 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75"] Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.246759 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.340475 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab5660ee-1372-4a12-9dbc-020b356597cd-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75\" (UID: \"ab5660ee-1372-4a12-9dbc-020b356597cd\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.340619 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smp69\" (UniqueName: \"kubernetes.io/projected/ab5660ee-1372-4a12-9dbc-020b356597cd-kube-api-access-smp69\") pod 
\"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75\" (UID: \"ab5660ee-1372-4a12-9dbc-020b356597cd\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.340745 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab5660ee-1372-4a12-9dbc-020b356597cd-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75\" (UID: \"ab5660ee-1372-4a12-9dbc-020b356597cd\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.441414 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab5660ee-1372-4a12-9dbc-020b356597cd-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75\" (UID: \"ab5660ee-1372-4a12-9dbc-020b356597cd\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.441488 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smp69\" (UniqueName: \"kubernetes.io/projected/ab5660ee-1372-4a12-9dbc-020b356597cd-kube-api-access-smp69\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75\" (UID: \"ab5660ee-1372-4a12-9dbc-020b356597cd\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.441548 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab5660ee-1372-4a12-9dbc-020b356597cd-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75\" (UID: \"ab5660ee-1372-4a12-9dbc-020b356597cd\") " 
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.442087 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab5660ee-1372-4a12-9dbc-020b356597cd-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75\" (UID: \"ab5660ee-1372-4a12-9dbc-020b356597cd\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.442087 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab5660ee-1372-4a12-9dbc-020b356597cd-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75\" (UID: \"ab5660ee-1372-4a12-9dbc-020b356597cd\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.484382 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smp69\" (UniqueName: \"kubernetes.io/projected/ab5660ee-1372-4a12-9dbc-020b356597cd-kube-api-access-smp69\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75\" (UID: \"ab5660ee-1372-4a12-9dbc-020b356597cd\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.566151 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" Nov 24 12:38:53 crc kubenswrapper[4756]: I1124 12:38:53.797741 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75"] Nov 24 12:38:54 crc kubenswrapper[4756]: I1124 12:38:54.072292 4756 generic.go:334] "Generic (PLEG): container finished" podID="ab5660ee-1372-4a12-9dbc-020b356597cd" containerID="0a12ec32965253e283411728022cb6c88e869e4416bf3459c8d852cdf3f21016" exitCode=0 Nov 24 12:38:54 crc kubenswrapper[4756]: I1124 12:38:54.072342 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" event={"ID":"ab5660ee-1372-4a12-9dbc-020b356597cd","Type":"ContainerDied","Data":"0a12ec32965253e283411728022cb6c88e869e4416bf3459c8d852cdf3f21016"} Nov 24 12:38:54 crc kubenswrapper[4756]: I1124 12:38:54.072401 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" event={"ID":"ab5660ee-1372-4a12-9dbc-020b356597cd","Type":"ContainerStarted","Data":"cc01b5d2066cfe697ebc878a684daa3ad058c3097775af9b1feaea607fc7c1b5"} Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.026818 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-srchr" podUID="874bfcf4-b717-4ee9-932f-8b28a2b68eac" containerName="console" containerID="cri-o://5d9367c2f1db2b56be014af4278c6a2ec92c120af8be205db498f076255f29e9" gracePeriod=15 Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.089983 4756 generic.go:334] "Generic (PLEG): container finished" podID="ab5660ee-1372-4a12-9dbc-020b356597cd" containerID="a0831081061d7155f3cb545a90d7cca0c0af9e445855822729bf3c54ee1cb28e" exitCode=0 Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.090080 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" event={"ID":"ab5660ee-1372-4a12-9dbc-020b356597cd","Type":"ContainerDied","Data":"a0831081061d7155f3cb545a90d7cca0c0af9e445855822729bf3c54ee1cb28e"} Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.462835 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-srchr_874bfcf4-b717-4ee9-932f-8b28a2b68eac/console/0.log" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.462909 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.587734 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-trusted-ca-bundle\") pod \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.587845 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-oauth-serving-cert\") pod \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.587885 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-oauth-config\") pod \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.587903 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-serving-cert\") pod \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.587923 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-service-ca\") pod \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.587964 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-config\") pod \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.587988 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsxgp\" (UniqueName: \"kubernetes.io/projected/874bfcf4-b717-4ee9-932f-8b28a2b68eac-kube-api-access-nsxgp\") pod \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\" (UID: \"874bfcf4-b717-4ee9-932f-8b28a2b68eac\") " Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.589038 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "874bfcf4-b717-4ee9-932f-8b28a2b68eac" (UID: "874bfcf4-b717-4ee9-932f-8b28a2b68eac"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.589090 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-service-ca" (OuterVolumeSpecName: "service-ca") pod "874bfcf4-b717-4ee9-932f-8b28a2b68eac" (UID: "874bfcf4-b717-4ee9-932f-8b28a2b68eac"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.589175 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "874bfcf4-b717-4ee9-932f-8b28a2b68eac" (UID: "874bfcf4-b717-4ee9-932f-8b28a2b68eac"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.589174 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-config" (OuterVolumeSpecName: "console-config") pod "874bfcf4-b717-4ee9-932f-8b28a2b68eac" (UID: "874bfcf4-b717-4ee9-932f-8b28a2b68eac"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.595754 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "874bfcf4-b717-4ee9-932f-8b28a2b68eac" (UID: "874bfcf4-b717-4ee9-932f-8b28a2b68eac"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.596098 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874bfcf4-b717-4ee9-932f-8b28a2b68eac-kube-api-access-nsxgp" (OuterVolumeSpecName: "kube-api-access-nsxgp") pod "874bfcf4-b717-4ee9-932f-8b28a2b68eac" (UID: "874bfcf4-b717-4ee9-932f-8b28a2b68eac"). InnerVolumeSpecName "kube-api-access-nsxgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.601643 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "874bfcf4-b717-4ee9-932f-8b28a2b68eac" (UID: "874bfcf4-b717-4ee9-932f-8b28a2b68eac"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.689347 4756 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.689386 4756 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.689400 4756 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.689413 4756 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.689438 4756 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.689450 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsxgp\" (UniqueName: \"kubernetes.io/projected/874bfcf4-b717-4ee9-932f-8b28a2b68eac-kube-api-access-nsxgp\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:56 crc kubenswrapper[4756]: I1124 12:38:56.689464 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/874bfcf4-b717-4ee9-932f-8b28a2b68eac-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:57 crc kubenswrapper[4756]: I1124 12:38:57.101376 4756 generic.go:334] "Generic (PLEG): container finished" podID="ab5660ee-1372-4a12-9dbc-020b356597cd" containerID="6ad09f60d00253f1b771ce10332361edfd8f6a48fd42e6c93e7c6318e013057b" exitCode=0 Nov 24 12:38:57 crc kubenswrapper[4756]: I1124 12:38:57.101602 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" event={"ID":"ab5660ee-1372-4a12-9dbc-020b356597cd","Type":"ContainerDied","Data":"6ad09f60d00253f1b771ce10332361edfd8f6a48fd42e6c93e7c6318e013057b"} Nov 24 12:38:57 crc kubenswrapper[4756]: I1124 12:38:57.103535 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-srchr_874bfcf4-b717-4ee9-932f-8b28a2b68eac/console/0.log" Nov 24 12:38:57 crc kubenswrapper[4756]: I1124 12:38:57.103590 4756 generic.go:334] "Generic (PLEG): container finished" podID="874bfcf4-b717-4ee9-932f-8b28a2b68eac" 
containerID="5d9367c2f1db2b56be014af4278c6a2ec92c120af8be205db498f076255f29e9" exitCode=2 Nov 24 12:38:57 crc kubenswrapper[4756]: I1124 12:38:57.103610 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-srchr" event={"ID":"874bfcf4-b717-4ee9-932f-8b28a2b68eac","Type":"ContainerDied","Data":"5d9367c2f1db2b56be014af4278c6a2ec92c120af8be205db498f076255f29e9"} Nov 24 12:38:57 crc kubenswrapper[4756]: I1124 12:38:57.103627 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-srchr" event={"ID":"874bfcf4-b717-4ee9-932f-8b28a2b68eac","Type":"ContainerDied","Data":"fec9f3d7fc9f16020397b44b94aee0650cf5be3fa359ab9ea6d8bbbbc3e303b7"} Nov 24 12:38:57 crc kubenswrapper[4756]: I1124 12:38:57.103655 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-srchr" Nov 24 12:38:57 crc kubenswrapper[4756]: I1124 12:38:57.103664 4756 scope.go:117] "RemoveContainer" containerID="5d9367c2f1db2b56be014af4278c6a2ec92c120af8be205db498f076255f29e9" Nov 24 12:38:57 crc kubenswrapper[4756]: I1124 12:38:57.123512 4756 scope.go:117] "RemoveContainer" containerID="5d9367c2f1db2b56be014af4278c6a2ec92c120af8be205db498f076255f29e9" Nov 24 12:38:57 crc kubenswrapper[4756]: E1124 12:38:57.123984 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d9367c2f1db2b56be014af4278c6a2ec92c120af8be205db498f076255f29e9\": container with ID starting with 5d9367c2f1db2b56be014af4278c6a2ec92c120af8be205db498f076255f29e9 not found: ID does not exist" containerID="5d9367c2f1db2b56be014af4278c6a2ec92c120af8be205db498f076255f29e9" Nov 24 12:38:57 crc kubenswrapper[4756]: I1124 12:38:57.124011 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9367c2f1db2b56be014af4278c6a2ec92c120af8be205db498f076255f29e9"} err="failed to get container 
status \"5d9367c2f1db2b56be014af4278c6a2ec92c120af8be205db498f076255f29e9\": rpc error: code = NotFound desc = could not find container \"5d9367c2f1db2b56be014af4278c6a2ec92c120af8be205db498f076255f29e9\": container with ID starting with 5d9367c2f1db2b56be014af4278c6a2ec92c120af8be205db498f076255f29e9 not found: ID does not exist" Nov 24 12:38:57 crc kubenswrapper[4756]: I1124 12:38:57.147290 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-srchr"] Nov 24 12:38:57 crc kubenswrapper[4756]: I1124 12:38:57.149411 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-srchr"] Nov 24 12:38:58 crc kubenswrapper[4756]: I1124 12:38:58.355881 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" Nov 24 12:38:58 crc kubenswrapper[4756]: I1124 12:38:58.413708 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab5660ee-1372-4a12-9dbc-020b356597cd-bundle\") pod \"ab5660ee-1372-4a12-9dbc-020b356597cd\" (UID: \"ab5660ee-1372-4a12-9dbc-020b356597cd\") " Nov 24 12:38:58 crc kubenswrapper[4756]: I1124 12:38:58.413776 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab5660ee-1372-4a12-9dbc-020b356597cd-util\") pod \"ab5660ee-1372-4a12-9dbc-020b356597cd\" (UID: \"ab5660ee-1372-4a12-9dbc-020b356597cd\") " Nov 24 12:38:58 crc kubenswrapper[4756]: I1124 12:38:58.413875 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smp69\" (UniqueName: \"kubernetes.io/projected/ab5660ee-1372-4a12-9dbc-020b356597cd-kube-api-access-smp69\") pod \"ab5660ee-1372-4a12-9dbc-020b356597cd\" (UID: \"ab5660ee-1372-4a12-9dbc-020b356597cd\") " Nov 24 12:38:58 crc kubenswrapper[4756]: I1124 
12:38:58.415229 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab5660ee-1372-4a12-9dbc-020b356597cd-bundle" (OuterVolumeSpecName: "bundle") pod "ab5660ee-1372-4a12-9dbc-020b356597cd" (UID: "ab5660ee-1372-4a12-9dbc-020b356597cd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:38:58 crc kubenswrapper[4756]: I1124 12:38:58.419683 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5660ee-1372-4a12-9dbc-020b356597cd-kube-api-access-smp69" (OuterVolumeSpecName: "kube-api-access-smp69") pod "ab5660ee-1372-4a12-9dbc-020b356597cd" (UID: "ab5660ee-1372-4a12-9dbc-020b356597cd"). InnerVolumeSpecName "kube-api-access-smp69". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:38:58 crc kubenswrapper[4756]: I1124 12:38:58.429849 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab5660ee-1372-4a12-9dbc-020b356597cd-util" (OuterVolumeSpecName: "util") pod "ab5660ee-1372-4a12-9dbc-020b356597cd" (UID: "ab5660ee-1372-4a12-9dbc-020b356597cd"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:38:58 crc kubenswrapper[4756]: I1124 12:38:58.500786 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874bfcf4-b717-4ee9-932f-8b28a2b68eac" path="/var/lib/kubelet/pods/874bfcf4-b717-4ee9-932f-8b28a2b68eac/volumes" Nov 24 12:38:58 crc kubenswrapper[4756]: I1124 12:38:58.516060 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smp69\" (UniqueName: \"kubernetes.io/projected/ab5660ee-1372-4a12-9dbc-020b356597cd-kube-api-access-smp69\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:58 crc kubenswrapper[4756]: I1124 12:38:58.516124 4756 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab5660ee-1372-4a12-9dbc-020b356597cd-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:58 crc kubenswrapper[4756]: I1124 12:38:58.516136 4756 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab5660ee-1372-4a12-9dbc-020b356597cd-util\") on node \"crc\" DevicePath \"\"" Nov 24 12:38:59 crc kubenswrapper[4756]: I1124 12:38:59.119207 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" event={"ID":"ab5660ee-1372-4a12-9dbc-020b356597cd","Type":"ContainerDied","Data":"cc01b5d2066cfe697ebc878a684daa3ad058c3097775af9b1feaea607fc7c1b5"} Nov 24 12:38:59 crc kubenswrapper[4756]: I1124 12:38:59.119253 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc01b5d2066cfe697ebc878a684daa3ad058c3097775af9b1feaea607fc7c1b5" Nov 24 12:38:59 crc kubenswrapper[4756]: I1124 12:38:59.119308 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75" Nov 24 12:39:06 crc kubenswrapper[4756]: I1124 12:39:06.980897 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-585778954f-lwtdb"] Nov 24 12:39:06 crc kubenswrapper[4756]: E1124 12:39:06.982210 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5660ee-1372-4a12-9dbc-020b356597cd" containerName="util" Nov 24 12:39:06 crc kubenswrapper[4756]: I1124 12:39:06.982235 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5660ee-1372-4a12-9dbc-020b356597cd" containerName="util" Nov 24 12:39:06 crc kubenswrapper[4756]: E1124 12:39:06.982274 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5660ee-1372-4a12-9dbc-020b356597cd" containerName="pull" Nov 24 12:39:06 crc kubenswrapper[4756]: I1124 12:39:06.982284 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5660ee-1372-4a12-9dbc-020b356597cd" containerName="pull" Nov 24 12:39:06 crc kubenswrapper[4756]: E1124 12:39:06.982301 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5660ee-1372-4a12-9dbc-020b356597cd" containerName="extract" Nov 24 12:39:06 crc kubenswrapper[4756]: I1124 12:39:06.982308 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5660ee-1372-4a12-9dbc-020b356597cd" containerName="extract" Nov 24 12:39:06 crc kubenswrapper[4756]: E1124 12:39:06.982319 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874bfcf4-b717-4ee9-932f-8b28a2b68eac" containerName="console" Nov 24 12:39:06 crc kubenswrapper[4756]: I1124 12:39:06.982324 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="874bfcf4-b717-4ee9-932f-8b28a2b68eac" containerName="console" Nov 24 12:39:06 crc kubenswrapper[4756]: I1124 12:39:06.982466 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5660ee-1372-4a12-9dbc-020b356597cd" 
containerName="extract" Nov 24 12:39:06 crc kubenswrapper[4756]: I1124 12:39:06.982476 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="874bfcf4-b717-4ee9-932f-8b28a2b68eac" containerName="console" Nov 24 12:39:06 crc kubenswrapper[4756]: I1124 12:39:06.983149 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" Nov 24 12:39:06 crc kubenswrapper[4756]: I1124 12:39:06.984984 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 24 12:39:06 crc kubenswrapper[4756]: I1124 12:39:06.985712 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 24 12:39:06 crc kubenswrapper[4756]: I1124 12:39:06.986078 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 24 12:39:06 crc kubenswrapper[4756]: I1124 12:39:06.986622 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-flg5n" Nov 24 12:39:06 crc kubenswrapper[4756]: I1124 12:39:06.987009 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.001905 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-585778954f-lwtdb"] Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.124702 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnfwq\" (UniqueName: \"kubernetes.io/projected/fe690ebd-7c38-400c-bd3e-ddec63e361ea-kube-api-access-xnfwq\") pod \"metallb-operator-controller-manager-585778954f-lwtdb\" (UID: \"fe690ebd-7c38-400c-bd3e-ddec63e361ea\") " 
pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.124791 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fe690ebd-7c38-400c-bd3e-ddec63e361ea-apiservice-cert\") pod \"metallb-operator-controller-manager-585778954f-lwtdb\" (UID: \"fe690ebd-7c38-400c-bd3e-ddec63e361ea\") " pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.124887 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fe690ebd-7c38-400c-bd3e-ddec63e361ea-webhook-cert\") pod \"metallb-operator-controller-manager-585778954f-lwtdb\" (UID: \"fe690ebd-7c38-400c-bd3e-ddec63e361ea\") " pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.225650 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fe690ebd-7c38-400c-bd3e-ddec63e361ea-webhook-cert\") pod \"metallb-operator-controller-manager-585778954f-lwtdb\" (UID: \"fe690ebd-7c38-400c-bd3e-ddec63e361ea\") " pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.225943 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnfwq\" (UniqueName: \"kubernetes.io/projected/fe690ebd-7c38-400c-bd3e-ddec63e361ea-kube-api-access-xnfwq\") pod \"metallb-operator-controller-manager-585778954f-lwtdb\" (UID: \"fe690ebd-7c38-400c-bd3e-ddec63e361ea\") " pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.225979 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fe690ebd-7c38-400c-bd3e-ddec63e361ea-apiservice-cert\") pod \"metallb-operator-controller-manager-585778954f-lwtdb\" (UID: \"fe690ebd-7c38-400c-bd3e-ddec63e361ea\") " pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.232809 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fe690ebd-7c38-400c-bd3e-ddec63e361ea-apiservice-cert\") pod \"metallb-operator-controller-manager-585778954f-lwtdb\" (UID: \"fe690ebd-7c38-400c-bd3e-ddec63e361ea\") " pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.238005 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fe690ebd-7c38-400c-bd3e-ddec63e361ea-webhook-cert\") pod \"metallb-operator-controller-manager-585778954f-lwtdb\" (UID: \"fe690ebd-7c38-400c-bd3e-ddec63e361ea\") " pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.251277 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnfwq\" (UniqueName: \"kubernetes.io/projected/fe690ebd-7c38-400c-bd3e-ddec63e361ea-kube-api-access-xnfwq\") pod \"metallb-operator-controller-manager-585778954f-lwtdb\" (UID: \"fe690ebd-7c38-400c-bd3e-ddec63e361ea\") " pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.303814 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.367498 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8"] Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.368692 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.371467 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.371485 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-h5sds" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.372400 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.412643 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8"] Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.544312 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsctc\" (UniqueName: \"kubernetes.io/projected/df074e39-b784-4804-afd8-3625ad3fecd0-kube-api-access-hsctc\") pod \"metallb-operator-webhook-server-7bbc7fc897-wd2m8\" (UID: \"df074e39-b784-4804-afd8-3625ad3fecd0\") " pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.544945 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df074e39-b784-4804-afd8-3625ad3fecd0-apiservice-cert\") pod 
\"metallb-operator-webhook-server-7bbc7fc897-wd2m8\" (UID: \"df074e39-b784-4804-afd8-3625ad3fecd0\") " pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.545038 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df074e39-b784-4804-afd8-3625ad3fecd0-webhook-cert\") pod \"metallb-operator-webhook-server-7bbc7fc897-wd2m8\" (UID: \"df074e39-b784-4804-afd8-3625ad3fecd0\") " pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.646212 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsctc\" (UniqueName: \"kubernetes.io/projected/df074e39-b784-4804-afd8-3625ad3fecd0-kube-api-access-hsctc\") pod \"metallb-operator-webhook-server-7bbc7fc897-wd2m8\" (UID: \"df074e39-b784-4804-afd8-3625ad3fecd0\") " pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.646309 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df074e39-b784-4804-afd8-3625ad3fecd0-apiservice-cert\") pod \"metallb-operator-webhook-server-7bbc7fc897-wd2m8\" (UID: \"df074e39-b784-4804-afd8-3625ad3fecd0\") " pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.646374 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df074e39-b784-4804-afd8-3625ad3fecd0-webhook-cert\") pod \"metallb-operator-webhook-server-7bbc7fc897-wd2m8\" (UID: \"df074e39-b784-4804-afd8-3625ad3fecd0\") " pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.655050 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df074e39-b784-4804-afd8-3625ad3fecd0-apiservice-cert\") pod \"metallb-operator-webhook-server-7bbc7fc897-wd2m8\" (UID: \"df074e39-b784-4804-afd8-3625ad3fecd0\") " pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.658549 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df074e39-b784-4804-afd8-3625ad3fecd0-webhook-cert\") pod \"metallb-operator-webhook-server-7bbc7fc897-wd2m8\" (UID: \"df074e39-b784-4804-afd8-3625ad3fecd0\") " pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.682252 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsctc\" (UniqueName: \"kubernetes.io/projected/df074e39-b784-4804-afd8-3625ad3fecd0-kube-api-access-hsctc\") pod \"metallb-operator-webhook-server-7bbc7fc897-wd2m8\" (UID: \"df074e39-b784-4804-afd8-3625ad3fecd0\") " pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.697683 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.769068 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-585778954f-lwtdb"] Nov 24 12:39:07 crc kubenswrapper[4756]: W1124 12:39:07.774700 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe690ebd_7c38_400c_bd3e_ddec63e361ea.slice/crio-7d334388c9f9fc7fc1e3057fe73850a5b0bf202b049a50f184a59c6addf028ec WatchSource:0}: Error finding container 7d334388c9f9fc7fc1e3057fe73850a5b0bf202b049a50f184a59c6addf028ec: Status 404 returned error can't find the container with id 7d334388c9f9fc7fc1e3057fe73850a5b0bf202b049a50f184a59c6addf028ec Nov 24 12:39:07 crc kubenswrapper[4756]: I1124 12:39:07.946837 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8"] Nov 24 12:39:07 crc kubenswrapper[4756]: W1124 12:39:07.956332 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf074e39_b784_4804_afd8_3625ad3fecd0.slice/crio-e7231f4beaf13247f51f96299c6d418884a5c5f55da9ed3d6c014c1a0ebe9cae WatchSource:0}: Error finding container e7231f4beaf13247f51f96299c6d418884a5c5f55da9ed3d6c014c1a0ebe9cae: Status 404 returned error can't find the container with id e7231f4beaf13247f51f96299c6d418884a5c5f55da9ed3d6c014c1a0ebe9cae Nov 24 12:39:08 crc kubenswrapper[4756]: I1124 12:39:08.168351 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" event={"ID":"df074e39-b784-4804-afd8-3625ad3fecd0","Type":"ContainerStarted","Data":"e7231f4beaf13247f51f96299c6d418884a5c5f55da9ed3d6c014c1a0ebe9cae"} Nov 24 12:39:08 crc kubenswrapper[4756]: I1124 12:39:08.169669 4756 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" event={"ID":"fe690ebd-7c38-400c-bd3e-ddec63e361ea","Type":"ContainerStarted","Data":"7d334388c9f9fc7fc1e3057fe73850a5b0bf202b049a50f184a59c6addf028ec"} Nov 24 12:39:12 crc kubenswrapper[4756]: I1124 12:39:12.199634 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" event={"ID":"fe690ebd-7c38-400c-bd3e-ddec63e361ea","Type":"ContainerStarted","Data":"966aa6310c3943ab18add7fe00cf6c236adc736ded922494c7d1f63d7b53d7ac"} Nov 24 12:39:12 crc kubenswrapper[4756]: I1124 12:39:12.200233 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" Nov 24 12:39:12 crc kubenswrapper[4756]: I1124 12:39:12.245353 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" podStartSLOduration=2.932824466 podStartE2EDuration="6.245335513s" podCreationTimestamp="2025-11-24 12:39:06 +0000 UTC" firstStartedPulling="2025-11-24 12:39:07.779031491 +0000 UTC m=+680.136545633" lastFinishedPulling="2025-11-24 12:39:11.091542538 +0000 UTC m=+683.449056680" observedRunningTime="2025-11-24 12:39:12.243829352 +0000 UTC m=+684.601343504" watchObservedRunningTime="2025-11-24 12:39:12.245335513 +0000 UTC m=+684.602849655" Nov 24 12:39:13 crc kubenswrapper[4756]: I1124 12:39:13.205765 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" event={"ID":"df074e39-b784-4804-afd8-3625ad3fecd0","Type":"ContainerStarted","Data":"289af321060f99045d2d39d56cb5e8ddfb256a718cb09ad7a4e2a52411853bae"} Nov 24 12:39:13 crc kubenswrapper[4756]: I1124 12:39:13.227586 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" 
podStartSLOduration=1.1553858 podStartE2EDuration="6.227568347s" podCreationTimestamp="2025-11-24 12:39:07 +0000 UTC" firstStartedPulling="2025-11-24 12:39:07.958184537 +0000 UTC m=+680.315698669" lastFinishedPulling="2025-11-24 12:39:13.030367074 +0000 UTC m=+685.387881216" observedRunningTime="2025-11-24 12:39:13.222787908 +0000 UTC m=+685.580302060" watchObservedRunningTime="2025-11-24 12:39:13.227568347 +0000 UTC m=+685.585082489" Nov 24 12:39:14 crc kubenswrapper[4756]: I1124 12:39:14.210944 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" Nov 24 12:39:24 crc kubenswrapper[4756]: I1124 12:39:24.997894 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mrnbw"] Nov 24 12:39:24 crc kubenswrapper[4756]: I1124 12:39:24.999520 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" podUID="76ac3240-bc3d-4688-9aa1-1976279a656d" containerName="controller-manager" containerID="cri-o://35a95105bdfd65f9b175b660c5f303b4c62f873a0cbe3d3506c0513683eabb56" gracePeriod=30 Nov 24 12:39:25 crc kubenswrapper[4756]: I1124 12:39:25.003407 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz"] Nov 24 12:39:25 crc kubenswrapper[4756]: I1124 12:39:25.003643 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" podUID="ea390924-dfd9-4c47-90f2-ca9d413e7c5f" containerName="route-controller-manager" containerID="cri-o://457f026b0c40ee1a60f452cfd9e7b52f6f4956243d37a13c33f37daa7ee84576" gracePeriod=30 Nov 24 12:39:25 crc kubenswrapper[4756]: I1124 12:39:25.281223 4756 generic.go:334] "Generic (PLEG): container finished" podID="76ac3240-bc3d-4688-9aa1-1976279a656d" 
containerID="35a95105bdfd65f9b175b660c5f303b4c62f873a0cbe3d3506c0513683eabb56" exitCode=0 Nov 24 12:39:25 crc kubenswrapper[4756]: I1124 12:39:25.281327 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" event={"ID":"76ac3240-bc3d-4688-9aa1-1976279a656d","Type":"ContainerDied","Data":"35a95105bdfd65f9b175b660c5f303b4c62f873a0cbe3d3506c0513683eabb56"} Nov 24 12:39:25 crc kubenswrapper[4756]: I1124 12:39:25.885937 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:39:25 crc kubenswrapper[4756]: I1124 12:39:25.998458 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-proxy-ca-bundles\") pod \"76ac3240-bc3d-4688-9aa1-1976279a656d\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " Nov 24 12:39:25 crc kubenswrapper[4756]: I1124 12:39:25.998844 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdrrp\" (UniqueName: \"kubernetes.io/projected/76ac3240-bc3d-4688-9aa1-1976279a656d-kube-api-access-vdrrp\") pod \"76ac3240-bc3d-4688-9aa1-1976279a656d\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " Nov 24 12:39:25 crc kubenswrapper[4756]: I1124 12:39:25.998899 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-client-ca\") pod \"76ac3240-bc3d-4688-9aa1-1976279a656d\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " Nov 24 12:39:25 crc kubenswrapper[4756]: I1124 12:39:25.998939 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-config\") pod \"76ac3240-bc3d-4688-9aa1-1976279a656d\" 
(UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " Nov 24 12:39:25 crc kubenswrapper[4756]: I1124 12:39:25.998955 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ac3240-bc3d-4688-9aa1-1976279a656d-serving-cert\") pod \"76ac3240-bc3d-4688-9aa1-1976279a656d\" (UID: \"76ac3240-bc3d-4688-9aa1-1976279a656d\") " Nov 24 12:39:25 crc kubenswrapper[4756]: I1124 12:39:25.999333 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "76ac3240-bc3d-4688-9aa1-1976279a656d" (UID: "76ac3240-bc3d-4688-9aa1-1976279a656d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:39:25 crc kubenswrapper[4756]: I1124 12:39:25.999632 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-client-ca" (OuterVolumeSpecName: "client-ca") pod "76ac3240-bc3d-4688-9aa1-1976279a656d" (UID: "76ac3240-bc3d-4688-9aa1-1976279a656d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.000373 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-config" (OuterVolumeSpecName: "config") pod "76ac3240-bc3d-4688-9aa1-1976279a656d" (UID: "76ac3240-bc3d-4688-9aa1-1976279a656d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.008761 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76ac3240-bc3d-4688-9aa1-1976279a656d-kube-api-access-vdrrp" (OuterVolumeSpecName: "kube-api-access-vdrrp") pod "76ac3240-bc3d-4688-9aa1-1976279a656d" (UID: "76ac3240-bc3d-4688-9aa1-1976279a656d"). InnerVolumeSpecName "kube-api-access-vdrrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.017725 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76ac3240-bc3d-4688-9aa1-1976279a656d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "76ac3240-bc3d-4688-9aa1-1976279a656d" (UID: "76ac3240-bc3d-4688-9aa1-1976279a656d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.037363 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.103831 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.103880 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ac3240-bc3d-4688-9aa1-1976279a656d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.103895 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.103907 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdrrp\" (UniqueName: \"kubernetes.io/projected/76ac3240-bc3d-4688-9aa1-1976279a656d-kube-api-access-vdrrp\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.103916 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76ac3240-bc3d-4688-9aa1-1976279a656d-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.205132 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-client-ca\") pod \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.205206 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjx88\" (UniqueName: 
\"kubernetes.io/projected/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-kube-api-access-cjx88\") pod \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.205277 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-config\") pod \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.205304 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-serving-cert\") pod \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\" (UID: \"ea390924-dfd9-4c47-90f2-ca9d413e7c5f\") " Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.206366 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-config" (OuterVolumeSpecName: "config") pod "ea390924-dfd9-4c47-90f2-ca9d413e7c5f" (UID: "ea390924-dfd9-4c47-90f2-ca9d413e7c5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.206424 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-client-ca" (OuterVolumeSpecName: "client-ca") pod "ea390924-dfd9-4c47-90f2-ca9d413e7c5f" (UID: "ea390924-dfd9-4c47-90f2-ca9d413e7c5f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.212812 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ea390924-dfd9-4c47-90f2-ca9d413e7c5f" (UID: "ea390924-dfd9-4c47-90f2-ca9d413e7c5f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.213783 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-kube-api-access-cjx88" (OuterVolumeSpecName: "kube-api-access-cjx88") pod "ea390924-dfd9-4c47-90f2-ca9d413e7c5f" (UID: "ea390924-dfd9-4c47-90f2-ca9d413e7c5f"). InnerVolumeSpecName "kube-api-access-cjx88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.291766 4756 generic.go:334] "Generic (PLEG): container finished" podID="ea390924-dfd9-4c47-90f2-ca9d413e7c5f" containerID="457f026b0c40ee1a60f452cfd9e7b52f6f4956243d37a13c33f37daa7ee84576" exitCode=0 Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.291813 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.291856 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" event={"ID":"ea390924-dfd9-4c47-90f2-ca9d413e7c5f","Type":"ContainerDied","Data":"457f026b0c40ee1a60f452cfd9e7b52f6f4956243d37a13c33f37daa7ee84576"} Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.291888 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz" event={"ID":"ea390924-dfd9-4c47-90f2-ca9d413e7c5f","Type":"ContainerDied","Data":"c8856cf6758b553d60c114c05cc00b1e0c391231e50e160985fc196425687fac"} Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.291927 4756 scope.go:117] "RemoveContainer" containerID="457f026b0c40ee1a60f452cfd9e7b52f6f4956243d37a13c33f37daa7ee84576" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.294114 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" event={"ID":"76ac3240-bc3d-4688-9aa1-1976279a656d","Type":"ContainerDied","Data":"969ba4ed95555ba09c4b2ca06bd04a39676b65b8fc928966bd8370334701f36f"} Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.294217 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mrnbw" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.317076 4756 scope.go:117] "RemoveContainer" containerID="457f026b0c40ee1a60f452cfd9e7b52f6f4956243d37a13c33f37daa7ee84576" Nov 24 12:39:26 crc kubenswrapper[4756]: E1124 12:39:26.318374 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"457f026b0c40ee1a60f452cfd9e7b52f6f4956243d37a13c33f37daa7ee84576\": container with ID starting with 457f026b0c40ee1a60f452cfd9e7b52f6f4956243d37a13c33f37daa7ee84576 not found: ID does not exist" containerID="457f026b0c40ee1a60f452cfd9e7b52f6f4956243d37a13c33f37daa7ee84576" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.318462 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"457f026b0c40ee1a60f452cfd9e7b52f6f4956243d37a13c33f37daa7ee84576"} err="failed to get container status \"457f026b0c40ee1a60f452cfd9e7b52f6f4956243d37a13c33f37daa7ee84576\": rpc error: code = NotFound desc = could not find container \"457f026b0c40ee1a60f452cfd9e7b52f6f4956243d37a13c33f37daa7ee84576\": container with ID starting with 457f026b0c40ee1a60f452cfd9e7b52f6f4956243d37a13c33f37daa7ee84576 not found: ID does not exist" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.318500 4756 scope.go:117] "RemoveContainer" containerID="35a95105bdfd65f9b175b660c5f303b4c62f873a0cbe3d3506c0513683eabb56" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.329651 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.329706 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-serving-cert\") on node 
\"crc\" DevicePath \"\"" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.329722 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.329743 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjx88\" (UniqueName: \"kubernetes.io/projected/ea390924-dfd9-4c47-90f2-ca9d413e7c5f-kube-api-access-cjx88\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.360613 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz"] Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.362209 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cl7lz"] Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.375477 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mrnbw"] Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.378997 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mrnbw"] Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.483272 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76ac3240-bc3d-4688-9aa1-1976279a656d" path="/var/lib/kubelet/pods/76ac3240-bc3d-4688-9aa1-1976279a656d/volumes" Nov 24 12:39:26 crc kubenswrapper[4756]: I1124 12:39:26.483951 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea390924-dfd9-4c47-90f2-ca9d413e7c5f" path="/var/lib/kubelet/pods/ea390924-dfd9-4c47-90f2-ca9d413e7c5f/volumes" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.034651 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz"] Nov 24 12:39:27 crc kubenswrapper[4756]: E1124 12:39:27.035149 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea390924-dfd9-4c47-90f2-ca9d413e7c5f" containerName="route-controller-manager" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.035219 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea390924-dfd9-4c47-90f2-ca9d413e7c5f" containerName="route-controller-manager" Nov 24 12:39:27 crc kubenswrapper[4756]: E1124 12:39:27.035233 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ac3240-bc3d-4688-9aa1-1976279a656d" containerName="controller-manager" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.035244 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ac3240-bc3d-4688-9aa1-1976279a656d" containerName="controller-manager" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.035421 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="76ac3240-bc3d-4688-9aa1-1976279a656d" containerName="controller-manager" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.035435 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea390924-dfd9-4c47-90f2-ca9d413e7c5f" containerName="route-controller-manager" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.036271 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.040092 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.040092 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.040211 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.040616 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.042360 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.043362 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59667f5cb8-5rw86"] Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.046211 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.053212 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.059756 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.060781 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.060960 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.061194 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.067307 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59667f5cb8-5rw86"] Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.069680 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.077487 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.077630 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.102685 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz"] Nov 24 12:39:27 crc 
kubenswrapper[4756]: I1124 12:39:27.141367 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7284cd32-4994-4b11-a1bf-91af1f2a5c46-client-ca\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.141704 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l59xr\" (UniqueName: \"kubernetes.io/projected/d63a6096-f9df-4200-8423-83a4a4acc5dc-kube-api-access-l59xr\") pod \"route-controller-manager-66c55885bb-4kfjz\" (UID: \"d63a6096-f9df-4200-8423-83a4a4acc5dc\") " pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.141819 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7284cd32-4994-4b11-a1bf-91af1f2a5c46-proxy-ca-bundles\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.141929 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dz46\" (UniqueName: \"kubernetes.io/projected/7284cd32-4994-4b11-a1bf-91af1f2a5c46-kube-api-access-9dz46\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.142039 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7284cd32-4994-4b11-a1bf-91af1f2a5c46-config\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.142124 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7284cd32-4994-4b11-a1bf-91af1f2a5c46-serving-cert\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.142243 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63a6096-f9df-4200-8423-83a4a4acc5dc-config\") pod \"route-controller-manager-66c55885bb-4kfjz\" (UID: \"d63a6096-f9df-4200-8423-83a4a4acc5dc\") " pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.142363 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63a6096-f9df-4200-8423-83a4a4acc5dc-serving-cert\") pod \"route-controller-manager-66c55885bb-4kfjz\" (UID: \"d63a6096-f9df-4200-8423-83a4a4acc5dc\") " pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.142507 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d63a6096-f9df-4200-8423-83a4a4acc5dc-client-ca\") pod \"route-controller-manager-66c55885bb-4kfjz\" (UID: \"d63a6096-f9df-4200-8423-83a4a4acc5dc\") " 
pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.244033 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7284cd32-4994-4b11-a1bf-91af1f2a5c46-proxy-ca-bundles\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.244508 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dz46\" (UniqueName: \"kubernetes.io/projected/7284cd32-4994-4b11-a1bf-91af1f2a5c46-kube-api-access-9dz46\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.244546 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7284cd32-4994-4b11-a1bf-91af1f2a5c46-config\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.244570 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7284cd32-4994-4b11-a1bf-91af1f2a5c46-serving-cert\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.244595 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d63a6096-f9df-4200-8423-83a4a4acc5dc-config\") pod \"route-controller-manager-66c55885bb-4kfjz\" (UID: \"d63a6096-f9df-4200-8423-83a4a4acc5dc\") " pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.244624 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63a6096-f9df-4200-8423-83a4a4acc5dc-serving-cert\") pod \"route-controller-manager-66c55885bb-4kfjz\" (UID: \"d63a6096-f9df-4200-8423-83a4a4acc5dc\") " pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.244647 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d63a6096-f9df-4200-8423-83a4a4acc5dc-client-ca\") pod \"route-controller-manager-66c55885bb-4kfjz\" (UID: \"d63a6096-f9df-4200-8423-83a4a4acc5dc\") " pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.244681 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7284cd32-4994-4b11-a1bf-91af1f2a5c46-client-ca\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.244704 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l59xr\" (UniqueName: \"kubernetes.io/projected/d63a6096-f9df-4200-8423-83a4a4acc5dc-kube-api-access-l59xr\") pod \"route-controller-manager-66c55885bb-4kfjz\" (UID: \"d63a6096-f9df-4200-8423-83a4a4acc5dc\") " pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 
12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.246240 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7284cd32-4994-4b11-a1bf-91af1f2a5c46-client-ca\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.246305 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7284cd32-4994-4b11-a1bf-91af1f2a5c46-proxy-ca-bundles\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.246479 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d63a6096-f9df-4200-8423-83a4a4acc5dc-client-ca\") pod \"route-controller-manager-66c55885bb-4kfjz\" (UID: \"d63a6096-f9df-4200-8423-83a4a4acc5dc\") " pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.246746 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63a6096-f9df-4200-8423-83a4a4acc5dc-config\") pod \"route-controller-manager-66c55885bb-4kfjz\" (UID: \"d63a6096-f9df-4200-8423-83a4a4acc5dc\") " pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.246975 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7284cd32-4994-4b11-a1bf-91af1f2a5c46-config\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " 
pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.252889 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7284cd32-4994-4b11-a1bf-91af1f2a5c46-serving-cert\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.252940 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63a6096-f9df-4200-8423-83a4a4acc5dc-serving-cert\") pod \"route-controller-manager-66c55885bb-4kfjz\" (UID: \"d63a6096-f9df-4200-8423-83a4a4acc5dc\") " pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.268374 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l59xr\" (UniqueName: \"kubernetes.io/projected/d63a6096-f9df-4200-8423-83a4a4acc5dc-kube-api-access-l59xr\") pod \"route-controller-manager-66c55885bb-4kfjz\" (UID: \"d63a6096-f9df-4200-8423-83a4a4acc5dc\") " pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.268711 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dz46\" (UniqueName: \"kubernetes.io/projected/7284cd32-4994-4b11-a1bf-91af1f2a5c46-kube-api-access-9dz46\") pod \"controller-manager-59667f5cb8-5rw86\" (UID: \"7284cd32-4994-4b11-a1bf-91af1f2a5c46\") " pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.359684 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.397200 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.693884 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz"] Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.712694 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7bbc7fc897-wd2m8" Nov 24 12:39:27 crc kubenswrapper[4756]: I1124 12:39:27.968128 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59667f5cb8-5rw86"] Nov 24 12:39:27 crc kubenswrapper[4756]: W1124 12:39:27.979618 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7284cd32_4994_4b11_a1bf_91af1f2a5c46.slice/crio-a8a54d947bcf9a09cd6fc2896667776404dea79b4b52174d8c0e77f5f2586cb2 WatchSource:0}: Error finding container a8a54d947bcf9a09cd6fc2896667776404dea79b4b52174d8c0e77f5f2586cb2: Status 404 returned error can't find the container with id a8a54d947bcf9a09cd6fc2896667776404dea79b4b52174d8c0e77f5f2586cb2 Nov 24 12:39:28 crc kubenswrapper[4756]: I1124 12:39:28.315404 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" event={"ID":"d63a6096-f9df-4200-8423-83a4a4acc5dc","Type":"ContainerStarted","Data":"78dd1d95aabf2d19b102886264d47f1295e2aa717d7a965bc53b3165209a01b9"} Nov 24 12:39:28 crc kubenswrapper[4756]: I1124 12:39:28.315482 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" event={"ID":"d63a6096-f9df-4200-8423-83a4a4acc5dc","Type":"ContainerStarted","Data":"38f08f327dbf3dd03baad8a35cf254be30fae68e78811d47fa9930d007e36b8a"} Nov 24 12:39:28 crc kubenswrapper[4756]: I1124 12:39:28.315609 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:28 crc kubenswrapper[4756]: I1124 12:39:28.319598 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" event={"ID":"7284cd32-4994-4b11-a1bf-91af1f2a5c46","Type":"ContainerStarted","Data":"4afff1dd1268fa5c311e08dc5467ebb69c1bcbe5bbdb12b34be0db34d1d97f1c"} Nov 24 12:39:28 crc kubenswrapper[4756]: I1124 12:39:28.319636 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" event={"ID":"7284cd32-4994-4b11-a1bf-91af1f2a5c46","Type":"ContainerStarted","Data":"a8a54d947bcf9a09cd6fc2896667776404dea79b4b52174d8c0e77f5f2586cb2"} Nov 24 12:39:28 crc kubenswrapper[4756]: I1124 12:39:28.319917 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:28 crc kubenswrapper[4756]: I1124 12:39:28.332314 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" Nov 24 12:39:28 crc kubenswrapper[4756]: I1124 12:39:28.347446 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" podStartSLOduration=3.347413229 podStartE2EDuration="3.347413229s" podCreationTimestamp="2025-11-24 12:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 12:39:28.342239509 +0000 UTC m=+700.699753651" watchObservedRunningTime="2025-11-24 12:39:28.347413229 +0000 UTC m=+700.704927381" Nov 24 12:39:28 crc kubenswrapper[4756]: I1124 12:39:28.349566 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66c55885bb-4kfjz" Nov 24 12:39:28 crc kubenswrapper[4756]: I1124 12:39:28.389473 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59667f5cb8-5rw86" podStartSLOduration=3.389452623 podStartE2EDuration="3.389452623s" podCreationTimestamp="2025-11-24 12:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:39:28.38709519 +0000 UTC m=+700.744609342" watchObservedRunningTime="2025-11-24 12:39:28.389452623 +0000 UTC m=+700.746966765" Nov 24 12:39:32 crc kubenswrapper[4756]: I1124 12:39:32.634093 4756 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 12:39:44 crc kubenswrapper[4756]: I1124 12:39:44.637564 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7cvcn"] Nov 24 12:39:44 crc kubenswrapper[4756]: I1124 12:39:44.639630 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:44 crc kubenswrapper[4756]: I1124 12:39:44.655473 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7cvcn"] Nov 24 12:39:44 crc kubenswrapper[4756]: I1124 12:39:44.793710 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9zg4\" (UniqueName: \"kubernetes.io/projected/a0a9d290-cff2-40ae-b516-d301b45382f6-kube-api-access-x9zg4\") pod \"certified-operators-7cvcn\" (UID: \"a0a9d290-cff2-40ae-b516-d301b45382f6\") " pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:44 crc kubenswrapper[4756]: I1124 12:39:44.793773 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a9d290-cff2-40ae-b516-d301b45382f6-catalog-content\") pod \"certified-operators-7cvcn\" (UID: \"a0a9d290-cff2-40ae-b516-d301b45382f6\") " pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:44 crc kubenswrapper[4756]: I1124 12:39:44.793934 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a9d290-cff2-40ae-b516-d301b45382f6-utilities\") pod \"certified-operators-7cvcn\" (UID: \"a0a9d290-cff2-40ae-b516-d301b45382f6\") " pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:44 crc kubenswrapper[4756]: I1124 12:39:44.894999 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9zg4\" (UniqueName: \"kubernetes.io/projected/a0a9d290-cff2-40ae-b516-d301b45382f6-kube-api-access-x9zg4\") pod \"certified-operators-7cvcn\" (UID: \"a0a9d290-cff2-40ae-b516-d301b45382f6\") " pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:44 crc kubenswrapper[4756]: I1124 12:39:44.895069 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a9d290-cff2-40ae-b516-d301b45382f6-catalog-content\") pod \"certified-operators-7cvcn\" (UID: \"a0a9d290-cff2-40ae-b516-d301b45382f6\") " pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:44 crc kubenswrapper[4756]: I1124 12:39:44.895130 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a9d290-cff2-40ae-b516-d301b45382f6-utilities\") pod \"certified-operators-7cvcn\" (UID: \"a0a9d290-cff2-40ae-b516-d301b45382f6\") " pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:44 crc kubenswrapper[4756]: I1124 12:39:44.895585 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a9d290-cff2-40ae-b516-d301b45382f6-catalog-content\") pod \"certified-operators-7cvcn\" (UID: \"a0a9d290-cff2-40ae-b516-d301b45382f6\") " pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:44 crc kubenswrapper[4756]: I1124 12:39:44.895602 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a9d290-cff2-40ae-b516-d301b45382f6-utilities\") pod \"certified-operators-7cvcn\" (UID: \"a0a9d290-cff2-40ae-b516-d301b45382f6\") " pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:44 crc kubenswrapper[4756]: I1124 12:39:44.914386 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9zg4\" (UniqueName: \"kubernetes.io/projected/a0a9d290-cff2-40ae-b516-d301b45382f6-kube-api-access-x9zg4\") pod \"certified-operators-7cvcn\" (UID: \"a0a9d290-cff2-40ae-b516-d301b45382f6\") " pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:44 crc kubenswrapper[4756]: I1124 12:39:44.959170 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:45 crc kubenswrapper[4756]: I1124 12:39:45.413913 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7cvcn"] Nov 24 12:39:45 crc kubenswrapper[4756]: I1124 12:39:45.426294 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cvcn" event={"ID":"a0a9d290-cff2-40ae-b516-d301b45382f6","Type":"ContainerStarted","Data":"fd79f7f2cbf143f9e01793eac048d07b327343ffca9a1f4493114e05a663dd91"} Nov 24 12:39:46 crc kubenswrapper[4756]: I1124 12:39:46.434980 4756 generic.go:334] "Generic (PLEG): container finished" podID="a0a9d290-cff2-40ae-b516-d301b45382f6" containerID="0dd7258c7f33daed42e14ab2788b84e55d11de8c199d57b2321c46ddebddd5a6" exitCode=0 Nov 24 12:39:46 crc kubenswrapper[4756]: I1124 12:39:46.435358 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cvcn" event={"ID":"a0a9d290-cff2-40ae-b516-d301b45382f6","Type":"ContainerDied","Data":"0dd7258c7f33daed42e14ab2788b84e55d11de8c199d57b2321c46ddebddd5a6"} Nov 24 12:39:47 crc kubenswrapper[4756]: I1124 12:39:47.306988 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-585778954f-lwtdb" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.013693 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-v5pd2"] Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.016910 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.019648 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.019729 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.021269 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bct29" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.024709 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-2db82"] Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.025436 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.027264 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.035481 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-2db82"] Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.095600 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-r8zzq"] Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.096522 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-r8zzq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.098601 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-snlnl" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.098893 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.099076 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.099109 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.112285 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-5k8pq"] Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.113204 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-5k8pq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.115283 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.141111 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-5k8pq"] Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.142231 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/30798bdf-4846-4408-82f4-b22ba7ec7f84-frr-startup\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.142296 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30798bdf-4846-4408-82f4-b22ba7ec7f84-metrics-certs\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.142368 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/30798bdf-4846-4408-82f4-b22ba7ec7f84-metrics\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.142409 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/30798bdf-4846-4408-82f4-b22ba7ec7f84-reloader\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.142440 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2mnv\" (UniqueName: \"kubernetes.io/projected/7b16b70e-daf1-4950-994b-b0e166b95215-kube-api-access-g2mnv\") pod \"frr-k8s-webhook-server-6998585d5-2db82\" (UID: \"7b16b70e-daf1-4950-994b-b0e166b95215\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.142464 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t657\" (UniqueName: \"kubernetes.io/projected/30798bdf-4846-4408-82f4-b22ba7ec7f84-kube-api-access-5t657\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.142589 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/30798bdf-4846-4408-82f4-b22ba7ec7f84-frr-conf\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.142665 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b16b70e-daf1-4950-994b-b0e166b95215-cert\") pod \"frr-k8s-webhook-server-6998585d5-2db82\" (UID: \"7b16b70e-daf1-4950-994b-b0e166b95215\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.142714 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/30798bdf-4846-4408-82f4-b22ba7ec7f84-frr-sockets\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244324 
4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/30798bdf-4846-4408-82f4-b22ba7ec7f84-frr-startup\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244373 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30798bdf-4846-4408-82f4-b22ba7ec7f84-metrics-certs\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244403 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54a65742-0318-409a-8a0e-e5c01abe2945-memberlist\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244423 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e090cbac-2c8e-44a1-9df3-592d95aa0e66-metrics-certs\") pod \"controller-6c7b4b5f48-5k8pq\" (UID: \"e090cbac-2c8e-44a1-9df3-592d95aa0e66\") " pod="metallb-system/controller-6c7b4b5f48-5k8pq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244447 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54a65742-0318-409a-8a0e-e5c01abe2945-metrics-certs\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244480 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbdwm\" 
(UniqueName: \"kubernetes.io/projected/e090cbac-2c8e-44a1-9df3-592d95aa0e66-kube-api-access-gbdwm\") pod \"controller-6c7b4b5f48-5k8pq\" (UID: \"e090cbac-2c8e-44a1-9df3-592d95aa0e66\") " pod="metallb-system/controller-6c7b4b5f48-5k8pq" Nov 24 12:39:48 crc kubenswrapper[4756]: E1124 12:39:48.244589 4756 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244608 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/30798bdf-4846-4408-82f4-b22ba7ec7f84-metrics\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: E1124 12:39:48.244655 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30798bdf-4846-4408-82f4-b22ba7ec7f84-metrics-certs podName:30798bdf-4846-4408-82f4-b22ba7ec7f84 nodeName:}" failed. No retries permitted until 2025-11-24 12:39:48.744632409 +0000 UTC m=+721.102146551 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/30798bdf-4846-4408-82f4-b22ba7ec7f84-metrics-certs") pod "frr-k8s-v5pd2" (UID: "30798bdf-4846-4408-82f4-b22ba7ec7f84") : secret "frr-k8s-certs-secret" not found Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244712 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/54a65742-0318-409a-8a0e-e5c01abe2945-metallb-excludel2\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244763 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/30798bdf-4846-4408-82f4-b22ba7ec7f84-reloader\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244799 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2mnv\" (UniqueName: \"kubernetes.io/projected/7b16b70e-daf1-4950-994b-b0e166b95215-kube-api-access-g2mnv\") pod \"frr-k8s-webhook-server-6998585d5-2db82\" (UID: \"7b16b70e-daf1-4950-994b-b0e166b95215\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244820 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t657\" (UniqueName: \"kubernetes.io/projected/30798bdf-4846-4408-82f4-b22ba7ec7f84-kube-api-access-5t657\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244845 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2sjh\" 
(UniqueName: \"kubernetes.io/projected/54a65742-0318-409a-8a0e-e5c01abe2945-kube-api-access-d2sjh\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244866 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/30798bdf-4846-4408-82f4-b22ba7ec7f84-frr-conf\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244888 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b16b70e-daf1-4950-994b-b0e166b95215-cert\") pod \"frr-k8s-webhook-server-6998585d5-2db82\" (UID: \"7b16b70e-daf1-4950-994b-b0e166b95215\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244922 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/30798bdf-4846-4408-82f4-b22ba7ec7f84-frr-sockets\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.244990 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e090cbac-2c8e-44a1-9df3-592d95aa0e66-cert\") pod \"controller-6c7b4b5f48-5k8pq\" (UID: \"e090cbac-2c8e-44a1-9df3-592d95aa0e66\") " pod="metallb-system/controller-6c7b4b5f48-5k8pq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.245037 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/30798bdf-4846-4408-82f4-b22ba7ec7f84-metrics\") pod \"frr-k8s-v5pd2\" (UID: 
\"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: E1124 12:39:48.245039 4756 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 24 12:39:48 crc kubenswrapper[4756]: E1124 12:39:48.245113 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b16b70e-daf1-4950-994b-b0e166b95215-cert podName:7b16b70e-daf1-4950-994b-b0e166b95215 nodeName:}" failed. No retries permitted until 2025-11-24 12:39:48.74509599 +0000 UTC m=+721.102610132 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b16b70e-daf1-4950-994b-b0e166b95215-cert") pod "frr-k8s-webhook-server-6998585d5-2db82" (UID: "7b16b70e-daf1-4950-994b-b0e166b95215") : secret "frr-k8s-webhook-server-cert" not found Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.245223 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/30798bdf-4846-4408-82f4-b22ba7ec7f84-reloader\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.245392 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/30798bdf-4846-4408-82f4-b22ba7ec7f84-frr-sockets\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.245508 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/30798bdf-4846-4408-82f4-b22ba7ec7f84-frr-conf\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 
12:39:48.245687 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/30798bdf-4846-4408-82f4-b22ba7ec7f84-frr-startup\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.264974 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2mnv\" (UniqueName: \"kubernetes.io/projected/7b16b70e-daf1-4950-994b-b0e166b95215-kube-api-access-g2mnv\") pod \"frr-k8s-webhook-server-6998585d5-2db82\" (UID: \"7b16b70e-daf1-4950-994b-b0e166b95215\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.281895 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t657\" (UniqueName: \"kubernetes.io/projected/30798bdf-4846-4408-82f4-b22ba7ec7f84-kube-api-access-5t657\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.346650 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e090cbac-2c8e-44a1-9df3-592d95aa0e66-cert\") pod \"controller-6c7b4b5f48-5k8pq\" (UID: \"e090cbac-2c8e-44a1-9df3-592d95aa0e66\") " pod="metallb-system/controller-6c7b4b5f48-5k8pq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.346743 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54a65742-0318-409a-8a0e-e5c01abe2945-memberlist\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.346770 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/e090cbac-2c8e-44a1-9df3-592d95aa0e66-metrics-certs\") pod \"controller-6c7b4b5f48-5k8pq\" (UID: \"e090cbac-2c8e-44a1-9df3-592d95aa0e66\") " pod="metallb-system/controller-6c7b4b5f48-5k8pq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.346797 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54a65742-0318-409a-8a0e-e5c01abe2945-metrics-certs\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.346830 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbdwm\" (UniqueName: \"kubernetes.io/projected/e090cbac-2c8e-44a1-9df3-592d95aa0e66-kube-api-access-gbdwm\") pod \"controller-6c7b4b5f48-5k8pq\" (UID: \"e090cbac-2c8e-44a1-9df3-592d95aa0e66\") " pod="metallb-system/controller-6c7b4b5f48-5k8pq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.346871 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/54a65742-0318-409a-8a0e-e5c01abe2945-metallb-excludel2\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.346910 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2sjh\" (UniqueName: \"kubernetes.io/projected/54a65742-0318-409a-8a0e-e5c01abe2945-kube-api-access-d2sjh\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:48 crc kubenswrapper[4756]: E1124 12:39:48.346934 4756 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 12:39:48 crc kubenswrapper[4756]: E1124 12:39:48.347021 4756 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54a65742-0318-409a-8a0e-e5c01abe2945-memberlist podName:54a65742-0318-409a-8a0e-e5c01abe2945 nodeName:}" failed. No retries permitted until 2025-11-24 12:39:48.846998928 +0000 UTC m=+721.204513130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/54a65742-0318-409a-8a0e-e5c01abe2945-memberlist") pod "speaker-r8zzq" (UID: "54a65742-0318-409a-8a0e-e5c01abe2945") : secret "metallb-memberlist" not found Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.347665 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/54a65742-0318-409a-8a0e-e5c01abe2945-metallb-excludel2\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.350635 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54a65742-0318-409a-8a0e-e5c01abe2945-metrics-certs\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.353904 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e090cbac-2c8e-44a1-9df3-592d95aa0e66-cert\") pod \"controller-6c7b4b5f48-5k8pq\" (UID: \"e090cbac-2c8e-44a1-9df3-592d95aa0e66\") " pod="metallb-system/controller-6c7b4b5f48-5k8pq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.354337 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e090cbac-2c8e-44a1-9df3-592d95aa0e66-metrics-certs\") pod \"controller-6c7b4b5f48-5k8pq\" (UID: \"e090cbac-2c8e-44a1-9df3-592d95aa0e66\") " 
pod="metallb-system/controller-6c7b4b5f48-5k8pq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.362634 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2sjh\" (UniqueName: \"kubernetes.io/projected/54a65742-0318-409a-8a0e-e5c01abe2945-kube-api-access-d2sjh\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.362890 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbdwm\" (UniqueName: \"kubernetes.io/projected/e090cbac-2c8e-44a1-9df3-592d95aa0e66-kube-api-access-gbdwm\") pod \"controller-6c7b4b5f48-5k8pq\" (UID: \"e090cbac-2c8e-44a1-9df3-592d95aa0e66\") " pod="metallb-system/controller-6c7b4b5f48-5k8pq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.427426 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-5k8pq" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.450577 4756 generic.go:334] "Generic (PLEG): container finished" podID="a0a9d290-cff2-40ae-b516-d301b45382f6" containerID="9b3ecf7ff07d4b3a4f1facac4d4d807085496a29deb6eb30d64d4ce52aed6624" exitCode=0 Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.450670 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cvcn" event={"ID":"a0a9d290-cff2-40ae-b516-d301b45382f6","Type":"ContainerDied","Data":"9b3ecf7ff07d4b3a4f1facac4d4d807085496a29deb6eb30d64d4ce52aed6624"} Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.762539 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b16b70e-daf1-4950-994b-b0e166b95215-cert\") pod \"frr-k8s-webhook-server-6998585d5-2db82\" (UID: \"7b16b70e-daf1-4950-994b-b0e166b95215\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82" Nov 24 12:39:48 crc 
kubenswrapper[4756]: I1124 12:39:48.762619 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30798bdf-4846-4408-82f4-b22ba7ec7f84-metrics-certs\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.773644 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30798bdf-4846-4408-82f4-b22ba7ec7f84-metrics-certs\") pod \"frr-k8s-v5pd2\" (UID: \"30798bdf-4846-4408-82f4-b22ba7ec7f84\") " pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.774054 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b16b70e-daf1-4950-994b-b0e166b95215-cert\") pod \"frr-k8s-webhook-server-6998585d5-2db82\" (UID: \"7b16b70e-daf1-4950-994b-b0e166b95215\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.865966 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54a65742-0318-409a-8a0e-e5c01abe2945-memberlist\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:48 crc kubenswrapper[4756]: E1124 12:39:48.866147 4756 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 12:39:48 crc kubenswrapper[4756]: E1124 12:39:48.866243 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54a65742-0318-409a-8a0e-e5c01abe2945-memberlist podName:54a65742-0318-409a-8a0e-e5c01abe2945 nodeName:}" failed. No retries permitted until 2025-11-24 12:39:49.866220731 +0000 UTC m=+722.223734873 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/54a65742-0318-409a-8a0e-e5c01abe2945-memberlist") pod "speaker-r8zzq" (UID: "54a65742-0318-409a-8a0e-e5c01abe2945") : secret "metallb-memberlist" not found Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.878733 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-5k8pq"] Nov 24 12:39:48 crc kubenswrapper[4756]: W1124 12:39:48.895095 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode090cbac_2c8e_44a1_9df3_592d95aa0e66.slice/crio-9dd724ba5f26fd133082c40c4de40bc8e53eae4adce2ce22599d1128dd6fef1d WatchSource:0}: Error finding container 9dd724ba5f26fd133082c40c4de40bc8e53eae4adce2ce22599d1128dd6fef1d: Status 404 returned error can't find the container with id 9dd724ba5f26fd133082c40c4de40bc8e53eae4adce2ce22599d1128dd6fef1d Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.940142 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bct29" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.949022 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:39:48 crc kubenswrapper[4756]: I1124 12:39:48.951954 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82" Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.388185 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-2db82"] Nov 24 12:39:49 crc kubenswrapper[4756]: W1124 12:39:49.392503 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b16b70e_daf1_4950_994b_b0e166b95215.slice/crio-3a2f5289a736399431f78064e46627ca2298fd8e8f1676cbb39c9baad9c27a9e WatchSource:0}: Error finding container 3a2f5289a736399431f78064e46627ca2298fd8e8f1676cbb39c9baad9c27a9e: Status 404 returned error can't find the container with id 3a2f5289a736399431f78064e46627ca2298fd8e8f1676cbb39c9baad9c27a9e Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.459493 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82" event={"ID":"7b16b70e-daf1-4950-994b-b0e166b95215","Type":"ContainerStarted","Data":"3a2f5289a736399431f78064e46627ca2298fd8e8f1676cbb39c9baad9c27a9e"} Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.461738 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cvcn" event={"ID":"a0a9d290-cff2-40ae-b516-d301b45382f6","Type":"ContainerStarted","Data":"09def0a38c77a9fb84c955342ce3ae4ad98d1ea1b9c45e830c5b1886d48db548"} Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.462659 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v5pd2" event={"ID":"30798bdf-4846-4408-82f4-b22ba7ec7f84","Type":"ContainerStarted","Data":"4aeece821d6c2257464b8a877080d94522a49f6e2e2444d5be5f51fc7c36d27a"} Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.464356 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5k8pq" 
event={"ID":"e090cbac-2c8e-44a1-9df3-592d95aa0e66","Type":"ContainerStarted","Data":"c5f6609bad8289bac8cadb999479bd3dd000a3fa41a079f27069ce8186c2e18a"} Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.464410 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5k8pq" event={"ID":"e090cbac-2c8e-44a1-9df3-592d95aa0e66","Type":"ContainerStarted","Data":"bd8c2aa4876a9bab743b7a7dc4310f56f9697d1f760842cebad8bcf42ae514f9"} Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.464432 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5k8pq" event={"ID":"e090cbac-2c8e-44a1-9df3-592d95aa0e66","Type":"ContainerStarted","Data":"9dd724ba5f26fd133082c40c4de40bc8e53eae4adce2ce22599d1128dd6fef1d"} Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.464472 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-5k8pq" Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.484208 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7cvcn" podStartSLOduration=3.069454911 podStartE2EDuration="5.484187123s" podCreationTimestamp="2025-11-24 12:39:44 +0000 UTC" firstStartedPulling="2025-11-24 12:39:46.440093636 +0000 UTC m=+718.797607818" lastFinishedPulling="2025-11-24 12:39:48.854825888 +0000 UTC m=+721.212340030" observedRunningTime="2025-11-24 12:39:49.478574254 +0000 UTC m=+721.836088416" watchObservedRunningTime="2025-11-24 12:39:49.484187123 +0000 UTC m=+721.841701265" Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.503180 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-5k8pq" podStartSLOduration=1.503145844 podStartE2EDuration="1.503145844s" podCreationTimestamp="2025-11-24 12:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-24 12:39:49.49856297 +0000 UTC m=+721.856077132" watchObservedRunningTime="2025-11-24 12:39:49.503145844 +0000 UTC m=+721.860659986" Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.884680 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54a65742-0318-409a-8a0e-e5c01abe2945-memberlist\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.890276 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54a65742-0318-409a-8a0e-e5c01abe2945-memberlist\") pod \"speaker-r8zzq\" (UID: \"54a65742-0318-409a-8a0e-e5c01abe2945\") " pod="metallb-system/speaker-r8zzq" Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.913737 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-snlnl" Nov 24 12:39:49 crc kubenswrapper[4756]: I1124 12:39:49.922614 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-r8zzq" Nov 24 12:39:49 crc kubenswrapper[4756]: W1124 12:39:49.944389 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54a65742_0318_409a_8a0e_e5c01abe2945.slice/crio-575a17110a7636c727660921b54326c06a5cdd986dcdfc2174423ca36d69b63c WatchSource:0}: Error finding container 575a17110a7636c727660921b54326c06a5cdd986dcdfc2174423ca36d69b63c: Status 404 returned error can't find the container with id 575a17110a7636c727660921b54326c06a5cdd986dcdfc2174423ca36d69b63c Nov 24 12:39:50 crc kubenswrapper[4756]: I1124 12:39:50.472265 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r8zzq" event={"ID":"54a65742-0318-409a-8a0e-e5c01abe2945","Type":"ContainerStarted","Data":"9510a4668bb7c5e4989869009c1e4127fdf4a3e5f89ea83cb276a965658e7c7c"} Nov 24 12:39:50 crc kubenswrapper[4756]: I1124 12:39:50.472630 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r8zzq" event={"ID":"54a65742-0318-409a-8a0e-e5c01abe2945","Type":"ContainerStarted","Data":"e5b69626a1282a4fae96d955ecb0e26f397ae95c021316ab5c5dd1e41b4d6fde"} Nov 24 12:39:50 crc kubenswrapper[4756]: I1124 12:39:50.472645 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r8zzq" event={"ID":"54a65742-0318-409a-8a0e-e5c01abe2945","Type":"ContainerStarted","Data":"575a17110a7636c727660921b54326c06a5cdd986dcdfc2174423ca36d69b63c"} Nov 24 12:39:50 crc kubenswrapper[4756]: I1124 12:39:50.472811 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-r8zzq" Nov 24 12:39:50 crc kubenswrapper[4756]: I1124 12:39:50.499918 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-r8zzq" podStartSLOduration=2.499896003 podStartE2EDuration="2.499896003s" podCreationTimestamp="2025-11-24 12:39:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:39:50.497076503 +0000 UTC m=+722.854590655" watchObservedRunningTime="2025-11-24 12:39:50.499896003 +0000 UTC m=+722.857410145" Nov 24 12:39:54 crc kubenswrapper[4756]: I1124 12:39:54.960457 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:54 crc kubenswrapper[4756]: I1124 12:39:54.961350 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:55 crc kubenswrapper[4756]: I1124 12:39:55.016476 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:55 crc kubenswrapper[4756]: I1124 12:39:55.550510 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:55 crc kubenswrapper[4756]: I1124 12:39:55.594100 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7cvcn"] Nov 24 12:39:57 crc kubenswrapper[4756]: I1124 12:39:57.524151 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82" event={"ID":"7b16b70e-daf1-4950-994b-b0e166b95215","Type":"ContainerStarted","Data":"b96cec3b16e118e8f77c1af712d6a691d0255673ca730c9bdc2a004101659153"} Nov 24 12:39:57 crc kubenswrapper[4756]: I1124 12:39:57.525280 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82" Nov 24 12:39:57 crc kubenswrapper[4756]: I1124 12:39:57.527125 4756 generic.go:334] "Generic (PLEG): container finished" podID="30798bdf-4846-4408-82f4-b22ba7ec7f84" containerID="065e87cb0a48166b2f8d06f0077db6eff1c3157f7fd3ffac362b8d7dc31a98dd" exitCode=0 Nov 24 12:39:57 crc 
kubenswrapper[4756]: I1124 12:39:57.527158 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v5pd2" event={"ID":"30798bdf-4846-4408-82f4-b22ba7ec7f84","Type":"ContainerDied","Data":"065e87cb0a48166b2f8d06f0077db6eff1c3157f7fd3ffac362b8d7dc31a98dd"} Nov 24 12:39:57 crc kubenswrapper[4756]: I1124 12:39:57.527321 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7cvcn" podUID="a0a9d290-cff2-40ae-b516-d301b45382f6" containerName="registry-server" containerID="cri-o://09def0a38c77a9fb84c955342ce3ae4ad98d1ea1b9c45e830c5b1886d48db548" gracePeriod=2 Nov 24 12:39:57 crc kubenswrapper[4756]: I1124 12:39:57.543241 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82" podStartSLOduration=1.8624706789999999 podStartE2EDuration="9.543227775s" podCreationTimestamp="2025-11-24 12:39:48 +0000 UTC" firstStartedPulling="2025-11-24 12:39:49.394766555 +0000 UTC m=+721.752280697" lastFinishedPulling="2025-11-24 12:39:57.075523651 +0000 UTC m=+729.433037793" observedRunningTime="2025-11-24 12:39:57.540879617 +0000 UTC m=+729.898393759" watchObservedRunningTime="2025-11-24 12:39:57.543227775 +0000 UTC m=+729.900741917" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.010519 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.098242 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a9d290-cff2-40ae-b516-d301b45382f6-catalog-content\") pod \"a0a9d290-cff2-40ae-b516-d301b45382f6\" (UID: \"a0a9d290-cff2-40ae-b516-d301b45382f6\") " Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.098349 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9zg4\" (UniqueName: \"kubernetes.io/projected/a0a9d290-cff2-40ae-b516-d301b45382f6-kube-api-access-x9zg4\") pod \"a0a9d290-cff2-40ae-b516-d301b45382f6\" (UID: \"a0a9d290-cff2-40ae-b516-d301b45382f6\") " Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.098394 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a9d290-cff2-40ae-b516-d301b45382f6-utilities\") pod \"a0a9d290-cff2-40ae-b516-d301b45382f6\" (UID: \"a0a9d290-cff2-40ae-b516-d301b45382f6\") " Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.099498 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0a9d290-cff2-40ae-b516-d301b45382f6-utilities" (OuterVolumeSpecName: "utilities") pod "a0a9d290-cff2-40ae-b516-d301b45382f6" (UID: "a0a9d290-cff2-40ae-b516-d301b45382f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.104105 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a9d290-cff2-40ae-b516-d301b45382f6-kube-api-access-x9zg4" (OuterVolumeSpecName: "kube-api-access-x9zg4") pod "a0a9d290-cff2-40ae-b516-d301b45382f6" (UID: "a0a9d290-cff2-40ae-b516-d301b45382f6"). InnerVolumeSpecName "kube-api-access-x9zg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.145737 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0a9d290-cff2-40ae-b516-d301b45382f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0a9d290-cff2-40ae-b516-d301b45382f6" (UID: "a0a9d290-cff2-40ae-b516-d301b45382f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.199967 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9zg4\" (UniqueName: \"kubernetes.io/projected/a0a9d290-cff2-40ae-b516-d301b45382f6-kube-api-access-x9zg4\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.200005 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a9d290-cff2-40ae-b516-d301b45382f6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.200015 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a9d290-cff2-40ae-b516-d301b45382f6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.536084 4756 generic.go:334] "Generic (PLEG): container finished" podID="30798bdf-4846-4408-82f4-b22ba7ec7f84" containerID="0ddc2ca272afd61ebdc7851e211c5eaa6847918f5e06f06a5dd0d43eb26f107e" exitCode=0 Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.536137 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v5pd2" event={"ID":"30798bdf-4846-4408-82f4-b22ba7ec7f84","Type":"ContainerDied","Data":"0ddc2ca272afd61ebdc7851e211c5eaa6847918f5e06f06a5dd0d43eb26f107e"} Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.538972 4756 generic.go:334] "Generic (PLEG): container finished" 
podID="a0a9d290-cff2-40ae-b516-d301b45382f6" containerID="09def0a38c77a9fb84c955342ce3ae4ad98d1ea1b9c45e830c5b1886d48db548" exitCode=0 Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.539041 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cvcn" event={"ID":"a0a9d290-cff2-40ae-b516-d301b45382f6","Type":"ContainerDied","Data":"09def0a38c77a9fb84c955342ce3ae4ad98d1ea1b9c45e830c5b1886d48db548"} Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.539078 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cvcn" event={"ID":"a0a9d290-cff2-40ae-b516-d301b45382f6","Type":"ContainerDied","Data":"fd79f7f2cbf143f9e01793eac048d07b327343ffca9a1f4493114e05a663dd91"} Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.539090 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7cvcn" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.539099 4756 scope.go:117] "RemoveContainer" containerID="09def0a38c77a9fb84c955342ce3ae4ad98d1ea1b9c45e830c5b1886d48db548" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.557665 4756 scope.go:117] "RemoveContainer" containerID="9b3ecf7ff07d4b3a4f1facac4d4d807085496a29deb6eb30d64d4ce52aed6624" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.589134 4756 scope.go:117] "RemoveContainer" containerID="0dd7258c7f33daed42e14ab2788b84e55d11de8c199d57b2321c46ddebddd5a6" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.607699 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7cvcn"] Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.614097 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7cvcn"] Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.634891 4756 scope.go:117] "RemoveContainer" 
containerID="09def0a38c77a9fb84c955342ce3ae4ad98d1ea1b9c45e830c5b1886d48db548" Nov 24 12:39:58 crc kubenswrapper[4756]: E1124 12:39:58.636428 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09def0a38c77a9fb84c955342ce3ae4ad98d1ea1b9c45e830c5b1886d48db548\": container with ID starting with 09def0a38c77a9fb84c955342ce3ae4ad98d1ea1b9c45e830c5b1886d48db548 not found: ID does not exist" containerID="09def0a38c77a9fb84c955342ce3ae4ad98d1ea1b9c45e830c5b1886d48db548" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.636478 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09def0a38c77a9fb84c955342ce3ae4ad98d1ea1b9c45e830c5b1886d48db548"} err="failed to get container status \"09def0a38c77a9fb84c955342ce3ae4ad98d1ea1b9c45e830c5b1886d48db548\": rpc error: code = NotFound desc = could not find container \"09def0a38c77a9fb84c955342ce3ae4ad98d1ea1b9c45e830c5b1886d48db548\": container with ID starting with 09def0a38c77a9fb84c955342ce3ae4ad98d1ea1b9c45e830c5b1886d48db548 not found: ID does not exist" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.636508 4756 scope.go:117] "RemoveContainer" containerID="9b3ecf7ff07d4b3a4f1facac4d4d807085496a29deb6eb30d64d4ce52aed6624" Nov 24 12:39:58 crc kubenswrapper[4756]: E1124 12:39:58.637026 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b3ecf7ff07d4b3a4f1facac4d4d807085496a29deb6eb30d64d4ce52aed6624\": container with ID starting with 9b3ecf7ff07d4b3a4f1facac4d4d807085496a29deb6eb30d64d4ce52aed6624 not found: ID does not exist" containerID="9b3ecf7ff07d4b3a4f1facac4d4d807085496a29deb6eb30d64d4ce52aed6624" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.637063 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9b3ecf7ff07d4b3a4f1facac4d4d807085496a29deb6eb30d64d4ce52aed6624"} err="failed to get container status \"9b3ecf7ff07d4b3a4f1facac4d4d807085496a29deb6eb30d64d4ce52aed6624\": rpc error: code = NotFound desc = could not find container \"9b3ecf7ff07d4b3a4f1facac4d4d807085496a29deb6eb30d64d4ce52aed6624\": container with ID starting with 9b3ecf7ff07d4b3a4f1facac4d4d807085496a29deb6eb30d64d4ce52aed6624 not found: ID does not exist" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.637091 4756 scope.go:117] "RemoveContainer" containerID="0dd7258c7f33daed42e14ab2788b84e55d11de8c199d57b2321c46ddebddd5a6" Nov 24 12:39:58 crc kubenswrapper[4756]: E1124 12:39:58.637506 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dd7258c7f33daed42e14ab2788b84e55d11de8c199d57b2321c46ddebddd5a6\": container with ID starting with 0dd7258c7f33daed42e14ab2788b84e55d11de8c199d57b2321c46ddebddd5a6 not found: ID does not exist" containerID="0dd7258c7f33daed42e14ab2788b84e55d11de8c199d57b2321c46ddebddd5a6" Nov 24 12:39:58 crc kubenswrapper[4756]: I1124 12:39:58.637530 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dd7258c7f33daed42e14ab2788b84e55d11de8c199d57b2321c46ddebddd5a6"} err="failed to get container status \"0dd7258c7f33daed42e14ab2788b84e55d11de8c199d57b2321c46ddebddd5a6\": rpc error: code = NotFound desc = could not find container \"0dd7258c7f33daed42e14ab2788b84e55d11de8c199d57b2321c46ddebddd5a6\": container with ID starting with 0dd7258c7f33daed42e14ab2788b84e55d11de8c199d57b2321c46ddebddd5a6 not found: ID does not exist" Nov 24 12:39:59 crc kubenswrapper[4756]: I1124 12:39:59.559311 4756 generic.go:334] "Generic (PLEG): container finished" podID="30798bdf-4846-4408-82f4-b22ba7ec7f84" containerID="f404162c03b3381e76bee8fb5c42c335135488f47a497df3c4a032a235a36a1d" exitCode=0 Nov 24 12:39:59 crc kubenswrapper[4756]: 
I1124 12:39:59.559419 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v5pd2" event={"ID":"30798bdf-4846-4408-82f4-b22ba7ec7f84","Type":"ContainerDied","Data":"f404162c03b3381e76bee8fb5c42c335135488f47a497df3c4a032a235a36a1d"} Nov 24 12:40:00 crc kubenswrapper[4756]: I1124 12:40:00.488710 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0a9d290-cff2-40ae-b516-d301b45382f6" path="/var/lib/kubelet/pods/a0a9d290-cff2-40ae-b516-d301b45382f6/volumes" Nov 24 12:40:00 crc kubenswrapper[4756]: I1124 12:40:00.588056 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v5pd2" event={"ID":"30798bdf-4846-4408-82f4-b22ba7ec7f84","Type":"ContainerStarted","Data":"b4ccb1bc6e18761a053d8f485356691ac2177057847b05cdaa53bfcf85c99eb4"} Nov 24 12:40:00 crc kubenswrapper[4756]: I1124 12:40:00.588126 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v5pd2" event={"ID":"30798bdf-4846-4408-82f4-b22ba7ec7f84","Type":"ContainerStarted","Data":"8936028c41795152d4fcfa7e824735687b94cf488bf5386e07bd183bfedc5b7c"} Nov 24 12:40:00 crc kubenswrapper[4756]: I1124 12:40:00.588145 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v5pd2" event={"ID":"30798bdf-4846-4408-82f4-b22ba7ec7f84","Type":"ContainerStarted","Data":"4e468851ace8f8ab45f52f2a7e9a763d1aa1723054242c9e44e59ecb204a2731"} Nov 24 12:40:00 crc kubenswrapper[4756]: I1124 12:40:00.588189 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v5pd2" event={"ID":"30798bdf-4846-4408-82f4-b22ba7ec7f84","Type":"ContainerStarted","Data":"f3c5f64cf21d7c2733ce74a0d09947bd01513a69586a3b45fb3a6cab4f2add83"} Nov 24 12:40:00 crc kubenswrapper[4756]: I1124 12:40:00.588209 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v5pd2" 
event={"ID":"30798bdf-4846-4408-82f4-b22ba7ec7f84","Type":"ContainerStarted","Data":"28fb918e8e0c6025fc4273b0455a3d1998cf08b660658ed23122a2d9a2d3b5a8"} Nov 24 12:40:01 crc kubenswrapper[4756]: I1124 12:40:01.598996 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v5pd2" event={"ID":"30798bdf-4846-4408-82f4-b22ba7ec7f84","Type":"ContainerStarted","Data":"0a7a5643c9823fb258473ac3c72192e7c214c432a62283e1f5b76a8cfe4c1d80"} Nov 24 12:40:01 crc kubenswrapper[4756]: I1124 12:40:01.599189 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-v5pd2" Nov 24 12:40:01 crc kubenswrapper[4756]: I1124 12:40:01.626143 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-v5pd2" podStartSLOduration=6.64875891 podStartE2EDuration="14.626112475s" podCreationTimestamp="2025-11-24 12:39:47 +0000 UTC" firstStartedPulling="2025-11-24 12:39:49.083798279 +0000 UTC m=+721.441312421" lastFinishedPulling="2025-11-24 12:39:57.061151844 +0000 UTC m=+729.418665986" observedRunningTime="2025-11-24 12:40:01.619280535 +0000 UTC m=+733.976794727" watchObservedRunningTime="2025-11-24 12:40:01.626112475 +0000 UTC m=+733.983626627" Nov 24 12:40:03 crc kubenswrapper[4756]: I1124 12:40:03.478863 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:40:03 crc kubenswrapper[4756]: I1124 12:40:03.478931 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:40:03 
crc kubenswrapper[4756]: I1124 12:40:03.949639 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-v5pd2"
Nov 24 12:40:03 crc kubenswrapper[4756]: I1124 12:40:03.985471 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-v5pd2"
Nov 24 12:40:08 crc kubenswrapper[4756]: I1124 12:40:08.433549 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-5k8pq"
Nov 24 12:40:08 crc kubenswrapper[4756]: I1124 12:40:08.993362 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2db82"
Nov 24 12:40:09 crc kubenswrapper[4756]: I1124 12:40:09.927937 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-r8zzq"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.123103 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bpqpb"]
Nov 24 12:40:13 crc kubenswrapper[4756]: E1124 12:40:13.123735 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a9d290-cff2-40ae-b516-d301b45382f6" containerName="extract-content"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.123749 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a9d290-cff2-40ae-b516-d301b45382f6" containerName="extract-content"
Nov 24 12:40:13 crc kubenswrapper[4756]: E1124 12:40:13.123771 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a9d290-cff2-40ae-b516-d301b45382f6" containerName="registry-server"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.123777 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a9d290-cff2-40ae-b516-d301b45382f6" containerName="registry-server"
Nov 24 12:40:13 crc kubenswrapper[4756]: E1124 12:40:13.123795 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a9d290-cff2-40ae-b516-d301b45382f6" containerName="extract-utilities"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.123803 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a9d290-cff2-40ae-b516-d301b45382f6" containerName="extract-utilities"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.123928 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a9d290-cff2-40ae-b516-d301b45382f6" containerName="registry-server"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.124428 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bpqpb"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.136637 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.136993 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.137381 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5p8cz"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.159183 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bpqpb"]
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.202720 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdkv8\" (UniqueName: \"kubernetes.io/projected/65b7abd8-cde2-4d17-8a31-b0cccdd156e0-kube-api-access-zdkv8\") pod \"openstack-operator-index-bpqpb\" (UID: \"65b7abd8-cde2-4d17-8a31-b0cccdd156e0\") " pod="openstack-operators/openstack-operator-index-bpqpb"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.304564 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdkv8\" (UniqueName: \"kubernetes.io/projected/65b7abd8-cde2-4d17-8a31-b0cccdd156e0-kube-api-access-zdkv8\") pod \"openstack-operator-index-bpqpb\" (UID: \"65b7abd8-cde2-4d17-8a31-b0cccdd156e0\") " pod="openstack-operators/openstack-operator-index-bpqpb"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.326509 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdkv8\" (UniqueName: \"kubernetes.io/projected/65b7abd8-cde2-4d17-8a31-b0cccdd156e0-kube-api-access-zdkv8\") pod \"openstack-operator-index-bpqpb\" (UID: \"65b7abd8-cde2-4d17-8a31-b0cccdd156e0\") " pod="openstack-operators/openstack-operator-index-bpqpb"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.460210 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bpqpb"
Nov 24 12:40:13 crc kubenswrapper[4756]: I1124 12:40:13.865421 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bpqpb"]
Nov 24 12:40:13 crc kubenswrapper[4756]: W1124 12:40:13.871066 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65b7abd8_cde2_4d17_8a31_b0cccdd156e0.slice/crio-53e5c6b22e28b1e30fc83d86c436cab5e86cd8fbafa2b7dbaf986fb4db6661cb WatchSource:0}: Error finding container 53e5c6b22e28b1e30fc83d86c436cab5e86cd8fbafa2b7dbaf986fb4db6661cb: Status 404 returned error can't find the container with id 53e5c6b22e28b1e30fc83d86c436cab5e86cd8fbafa2b7dbaf986fb4db6661cb
Nov 24 12:40:14 crc kubenswrapper[4756]: I1124 12:40:14.691370 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bpqpb" event={"ID":"65b7abd8-cde2-4d17-8a31-b0cccdd156e0","Type":"ContainerStarted","Data":"53e5c6b22e28b1e30fc83d86c436cab5e86cd8fbafa2b7dbaf986fb4db6661cb"}
Nov 24 12:40:16 crc kubenswrapper[4756]: I1124 12:40:16.702422 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bpqpb"]
Nov 24 12:40:16 crc kubenswrapper[4756]: I1124 12:40:16.709976 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bpqpb" event={"ID":"65b7abd8-cde2-4d17-8a31-b0cccdd156e0","Type":"ContainerStarted","Data":"9552ce49f2eddd1bd8b74361f06b6c6afdad1d4a8de1b8efa841699e0fe7aafa"}
Nov 24 12:40:16 crc kubenswrapper[4756]: I1124 12:40:16.735548 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bpqpb" podStartSLOduration=1.871396793 podStartE2EDuration="3.735516623s" podCreationTimestamp="2025-11-24 12:40:13 +0000 UTC" firstStartedPulling="2025-11-24 12:40:13.874917389 +0000 UTC m=+746.232431531" lastFinishedPulling="2025-11-24 12:40:15.739037219 +0000 UTC m=+748.096551361" observedRunningTime="2025-11-24 12:40:16.732640471 +0000 UTC m=+749.090154623" watchObservedRunningTime="2025-11-24 12:40:16.735516623 +0000 UTC m=+749.093030805"
Nov 24 12:40:17 crc kubenswrapper[4756]: I1124 12:40:17.507324 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7jgvj"]
Nov 24 12:40:17 crc kubenswrapper[4756]: I1124 12:40:17.508373 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7jgvj"
Nov 24 12:40:17 crc kubenswrapper[4756]: I1124 12:40:17.518314 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7jgvj"]
Nov 24 12:40:17 crc kubenswrapper[4756]: I1124 12:40:17.567353 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fz45\" (UniqueName: \"kubernetes.io/projected/bc62b86b-8abe-4660-b5dd-e80f36962d0a-kube-api-access-8fz45\") pod \"openstack-operator-index-7jgvj\" (UID: \"bc62b86b-8abe-4660-b5dd-e80f36962d0a\") " pod="openstack-operators/openstack-operator-index-7jgvj"
Nov 24 12:40:17 crc kubenswrapper[4756]: I1124 12:40:17.668129 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fz45\" (UniqueName: \"kubernetes.io/projected/bc62b86b-8abe-4660-b5dd-e80f36962d0a-kube-api-access-8fz45\") pod \"openstack-operator-index-7jgvj\" (UID: \"bc62b86b-8abe-4660-b5dd-e80f36962d0a\") " pod="openstack-operators/openstack-operator-index-7jgvj"
Nov 24 12:40:17 crc kubenswrapper[4756]: I1124 12:40:17.701584 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fz45\" (UniqueName: \"kubernetes.io/projected/bc62b86b-8abe-4660-b5dd-e80f36962d0a-kube-api-access-8fz45\") pod \"openstack-operator-index-7jgvj\" (UID: \"bc62b86b-8abe-4660-b5dd-e80f36962d0a\") " pod="openstack-operators/openstack-operator-index-7jgvj"
Nov 24 12:40:17 crc kubenswrapper[4756]: I1124 12:40:17.718731 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bpqpb" podUID="65b7abd8-cde2-4d17-8a31-b0cccdd156e0" containerName="registry-server" containerID="cri-o://9552ce49f2eddd1bd8b74361f06b6c6afdad1d4a8de1b8efa841699e0fe7aafa" gracePeriod=2
Nov 24 12:40:17 crc kubenswrapper[4756]: I1124 12:40:17.830767 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7jgvj"
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.154550 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bpqpb"
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.281372 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdkv8\" (UniqueName: \"kubernetes.io/projected/65b7abd8-cde2-4d17-8a31-b0cccdd156e0-kube-api-access-zdkv8\") pod \"65b7abd8-cde2-4d17-8a31-b0cccdd156e0\" (UID: \"65b7abd8-cde2-4d17-8a31-b0cccdd156e0\") "
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.287632 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b7abd8-cde2-4d17-8a31-b0cccdd156e0-kube-api-access-zdkv8" (OuterVolumeSpecName: "kube-api-access-zdkv8") pod "65b7abd8-cde2-4d17-8a31-b0cccdd156e0" (UID: "65b7abd8-cde2-4d17-8a31-b0cccdd156e0"). InnerVolumeSpecName "kube-api-access-zdkv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.313523 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7jgvj"]
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.383195 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdkv8\" (UniqueName: \"kubernetes.io/projected/65b7abd8-cde2-4d17-8a31-b0cccdd156e0-kube-api-access-zdkv8\") on node \"crc\" DevicePath \"\""
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.726785 4756 generic.go:334] "Generic (PLEG): container finished" podID="65b7abd8-cde2-4d17-8a31-b0cccdd156e0" containerID="9552ce49f2eddd1bd8b74361f06b6c6afdad1d4a8de1b8efa841699e0fe7aafa" exitCode=0
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.726870 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bpqpb"
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.726887 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bpqpb" event={"ID":"65b7abd8-cde2-4d17-8a31-b0cccdd156e0","Type":"ContainerDied","Data":"9552ce49f2eddd1bd8b74361f06b6c6afdad1d4a8de1b8efa841699e0fe7aafa"}
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.726950 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bpqpb" event={"ID":"65b7abd8-cde2-4d17-8a31-b0cccdd156e0","Type":"ContainerDied","Data":"53e5c6b22e28b1e30fc83d86c436cab5e86cd8fbafa2b7dbaf986fb4db6661cb"}
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.726971 4756 scope.go:117] "RemoveContainer" containerID="9552ce49f2eddd1bd8b74361f06b6c6afdad1d4a8de1b8efa841699e0fe7aafa"
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.729535 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7jgvj" event={"ID":"bc62b86b-8abe-4660-b5dd-e80f36962d0a","Type":"ContainerStarted","Data":"9008508ccfdda863ed29b4da8b99bfac03a369d5d1fcd28fb6bb0aa0aaba0a05"}
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.729576 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7jgvj" event={"ID":"bc62b86b-8abe-4660-b5dd-e80f36962d0a","Type":"ContainerStarted","Data":"7bc4a7faae39ccd58243435b59b5b2845e56b040e557e9834f3b43651639bec6"}
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.750056 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7jgvj" podStartSLOduration=1.71233142 podStartE2EDuration="1.750035874s" podCreationTimestamp="2025-11-24 12:40:17 +0000 UTC" firstStartedPulling="2025-11-24 12:40:18.321584855 +0000 UTC m=+750.679098987" lastFinishedPulling="2025-11-24 12:40:18.359289299 +0000 UTC m=+750.716803441" observedRunningTime="2025-11-24 12:40:18.746616149 +0000 UTC m=+751.104130301" watchObservedRunningTime="2025-11-24 12:40:18.750035874 +0000 UTC m=+751.107550016"
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.763461 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bpqpb"]
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.763700 4756 scope.go:117] "RemoveContainer" containerID="9552ce49f2eddd1bd8b74361f06b6c6afdad1d4a8de1b8efa841699e0fe7aafa"
Nov 24 12:40:18 crc kubenswrapper[4756]: E1124 12:40:18.764362 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9552ce49f2eddd1bd8b74361f06b6c6afdad1d4a8de1b8efa841699e0fe7aafa\": container with ID starting with 9552ce49f2eddd1bd8b74361f06b6c6afdad1d4a8de1b8efa841699e0fe7aafa not found: ID does not exist" containerID="9552ce49f2eddd1bd8b74361f06b6c6afdad1d4a8de1b8efa841699e0fe7aafa"
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.764408 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9552ce49f2eddd1bd8b74361f06b6c6afdad1d4a8de1b8efa841699e0fe7aafa"} err="failed to get container status \"9552ce49f2eddd1bd8b74361f06b6c6afdad1d4a8de1b8efa841699e0fe7aafa\": rpc error: code = NotFound desc = could not find container \"9552ce49f2eddd1bd8b74361f06b6c6afdad1d4a8de1b8efa841699e0fe7aafa\": container with ID starting with 9552ce49f2eddd1bd8b74361f06b6c6afdad1d4a8de1b8efa841699e0fe7aafa not found: ID does not exist"
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.767283 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-bpqpb"]
Nov 24 12:40:18 crc kubenswrapper[4756]: I1124 12:40:18.953325 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-v5pd2"
Nov 24 12:40:20 crc kubenswrapper[4756]: I1124 12:40:20.503631 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b7abd8-cde2-4d17-8a31-b0cccdd156e0" path="/var/lib/kubelet/pods/65b7abd8-cde2-4d17-8a31-b0cccdd156e0/volumes"
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.727134 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-72x2w"]
Nov 24 12:40:21 crc kubenswrapper[4756]: E1124 12:40:21.728477 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b7abd8-cde2-4d17-8a31-b0cccdd156e0" containerName="registry-server"
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.728621 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b7abd8-cde2-4d17-8a31-b0cccdd156e0" containerName="registry-server"
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.729121 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b7abd8-cde2-4d17-8a31-b0cccdd156e0" containerName="registry-server"
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.734834 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.741065 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-72x2w"]
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.831176 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b1c2097-4bf9-40c1-bd98-e13daf31f394-utilities\") pod \"community-operators-72x2w\" (UID: \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\") " pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.831487 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2nz8\" (UniqueName: \"kubernetes.io/projected/3b1c2097-4bf9-40c1-bd98-e13daf31f394-kube-api-access-c2nz8\") pod \"community-operators-72x2w\" (UID: \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\") " pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.831782 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b1c2097-4bf9-40c1-bd98-e13daf31f394-catalog-content\") pod \"community-operators-72x2w\" (UID: \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\") " pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.933728 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b1c2097-4bf9-40c1-bd98-e13daf31f394-catalog-content\") pod \"community-operators-72x2w\" (UID: \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\") " pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.933849 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b1c2097-4bf9-40c1-bd98-e13daf31f394-utilities\") pod \"community-operators-72x2w\" (UID: \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\") " pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.933877 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2nz8\" (UniqueName: \"kubernetes.io/projected/3b1c2097-4bf9-40c1-bd98-e13daf31f394-kube-api-access-c2nz8\") pod \"community-operators-72x2w\" (UID: \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\") " pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.934955 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b1c2097-4bf9-40c1-bd98-e13daf31f394-utilities\") pod \"community-operators-72x2w\" (UID: \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\") " pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.935178 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b1c2097-4bf9-40c1-bd98-e13daf31f394-catalog-content\") pod \"community-operators-72x2w\" (UID: \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\") " pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:21 crc kubenswrapper[4756]: I1124 12:40:21.956043 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2nz8\" (UniqueName: \"kubernetes.io/projected/3b1c2097-4bf9-40c1-bd98-e13daf31f394-kube-api-access-c2nz8\") pod \"community-operators-72x2w\" (UID: \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\") " pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:22 crc kubenswrapper[4756]: I1124 12:40:22.058904 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:22 crc kubenswrapper[4756]: I1124 12:40:22.539556 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-72x2w"]
Nov 24 12:40:22 crc kubenswrapper[4756]: W1124 12:40:22.545746 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b1c2097_4bf9_40c1_bd98_e13daf31f394.slice/crio-a979f5ed67d68bdc65ea7bd22518f29c7809c020d1d730f5dcc7f87b4f294fb6 WatchSource:0}: Error finding container a979f5ed67d68bdc65ea7bd22518f29c7809c020d1d730f5dcc7f87b4f294fb6: Status 404 returned error can't find the container with id a979f5ed67d68bdc65ea7bd22518f29c7809c020d1d730f5dcc7f87b4f294fb6
Nov 24 12:40:22 crc kubenswrapper[4756]: I1124 12:40:22.760961 4756 generic.go:334] "Generic (PLEG): container finished" podID="3b1c2097-4bf9-40c1-bd98-e13daf31f394" containerID="e06f6d690dd8b3ee77d6015bee01aac3f51660ca76b2e3f1021252285bf42ec9" exitCode=0
Nov 24 12:40:22 crc kubenswrapper[4756]: I1124 12:40:22.761005 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-72x2w" event={"ID":"3b1c2097-4bf9-40c1-bd98-e13daf31f394","Type":"ContainerDied","Data":"e06f6d690dd8b3ee77d6015bee01aac3f51660ca76b2e3f1021252285bf42ec9"}
Nov 24 12:40:22 crc kubenswrapper[4756]: I1124 12:40:22.761079 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-72x2w" event={"ID":"3b1c2097-4bf9-40c1-bd98-e13daf31f394","Type":"ContainerStarted","Data":"a979f5ed67d68bdc65ea7bd22518f29c7809c020d1d730f5dcc7f87b4f294fb6"}
Nov 24 12:40:23 crc kubenswrapper[4756]: I1124 12:40:23.770146 4756 generic.go:334] "Generic (PLEG): container finished" podID="3b1c2097-4bf9-40c1-bd98-e13daf31f394" containerID="26c8c1e76d52ce240f7875cdf615239269bdcef383fef1d258a6748dcfc32f11" exitCode=0
Nov 24 12:40:23 crc kubenswrapper[4756]: I1124 12:40:23.770263 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-72x2w" event={"ID":"3b1c2097-4bf9-40c1-bd98-e13daf31f394","Type":"ContainerDied","Data":"26c8c1e76d52ce240f7875cdf615239269bdcef383fef1d258a6748dcfc32f11"}
Nov 24 12:40:24 crc kubenswrapper[4756]: I1124 12:40:24.789392 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-72x2w" event={"ID":"3b1c2097-4bf9-40c1-bd98-e13daf31f394","Type":"ContainerStarted","Data":"08f011be455341199b5650618457893600e112d57e46ec8c2bd0fcdb7794ee1e"}
Nov 24 12:40:24 crc kubenswrapper[4756]: I1124 12:40:24.813887 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-72x2w" podStartSLOduration=2.078527288 podStartE2EDuration="3.813869834s" podCreationTimestamp="2025-11-24 12:40:21 +0000 UTC" firstStartedPulling="2025-11-24 12:40:22.76303412 +0000 UTC m=+755.120548262" lastFinishedPulling="2025-11-24 12:40:24.498376666 +0000 UTC m=+756.855890808" observedRunningTime="2025-11-24 12:40:24.810788257 +0000 UTC m=+757.168302399" watchObservedRunningTime="2025-11-24 12:40:24.813869834 +0000 UTC m=+757.171383976"
Nov 24 12:40:27 crc kubenswrapper[4756]: I1124 12:40:27.831364 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7jgvj"
Nov 24 12:40:27 crc kubenswrapper[4756]: I1124 12:40:27.831429 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7jgvj"
Nov 24 12:40:27 crc kubenswrapper[4756]: I1124 12:40:27.881358 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7jgvj"
Nov 24 12:40:28 crc kubenswrapper[4756]: I1124 12:40:28.854113 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7jgvj"
Nov 24 12:40:32 crc kubenswrapper[4756]: I1124 12:40:32.059728 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:32 crc kubenswrapper[4756]: I1124 12:40:32.060225 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:32 crc kubenswrapper[4756]: I1124 12:40:32.112305 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:32 crc kubenswrapper[4756]: I1124 12:40:32.881723 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:33 crc kubenswrapper[4756]: I1124 12:40:33.478890 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 12:40:33 crc kubenswrapper[4756]: I1124 12:40:33.479199 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 12:40:34 crc kubenswrapper[4756]: I1124 12:40:34.101746 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-72x2w"]
Nov 24 12:40:34 crc kubenswrapper[4756]: I1124 12:40:34.856467 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-72x2w" podUID="3b1c2097-4bf9-40c1-bd98-e13daf31f394" containerName="registry-server" containerID="cri-o://08f011be455341199b5650618457893600e112d57e46ec8c2bd0fcdb7794ee1e" gracePeriod=2
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.253682 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.315613 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b1c2097-4bf9-40c1-bd98-e13daf31f394-catalog-content\") pod \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\" (UID: \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\") "
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.315683 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b1c2097-4bf9-40c1-bd98-e13daf31f394-utilities\") pod \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\" (UID: \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\") "
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.315761 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2nz8\" (UniqueName: \"kubernetes.io/projected/3b1c2097-4bf9-40c1-bd98-e13daf31f394-kube-api-access-c2nz8\") pod \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\" (UID: \"3b1c2097-4bf9-40c1-bd98-e13daf31f394\") "
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.318185 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b1c2097-4bf9-40c1-bd98-e13daf31f394-utilities" (OuterVolumeSpecName: "utilities") pod "3b1c2097-4bf9-40c1-bd98-e13daf31f394" (UID: "3b1c2097-4bf9-40c1-bd98-e13daf31f394"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.321989 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1c2097-4bf9-40c1-bd98-e13daf31f394-kube-api-access-c2nz8" (OuterVolumeSpecName: "kube-api-access-c2nz8") pod "3b1c2097-4bf9-40c1-bd98-e13daf31f394" (UID: "3b1c2097-4bf9-40c1-bd98-e13daf31f394"). InnerVolumeSpecName "kube-api-access-c2nz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.373065 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b1c2097-4bf9-40c1-bd98-e13daf31f394-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b1c2097-4bf9-40c1-bd98-e13daf31f394" (UID: "3b1c2097-4bf9-40c1-bd98-e13daf31f394"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.417861 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b1c2097-4bf9-40c1-bd98-e13daf31f394-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.417895 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b1c2097-4bf9-40c1-bd98-e13daf31f394-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.417906 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2nz8\" (UniqueName: \"kubernetes.io/projected/3b1c2097-4bf9-40c1-bd98-e13daf31f394-kube-api-access-c2nz8\") on node \"crc\" DevicePath \"\""
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.867378 4756 generic.go:334] "Generic (PLEG): container finished" podID="3b1c2097-4bf9-40c1-bd98-e13daf31f394" containerID="08f011be455341199b5650618457893600e112d57e46ec8c2bd0fcdb7794ee1e" exitCode=0
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.867471 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-72x2w"
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.867453 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-72x2w" event={"ID":"3b1c2097-4bf9-40c1-bd98-e13daf31f394","Type":"ContainerDied","Data":"08f011be455341199b5650618457893600e112d57e46ec8c2bd0fcdb7794ee1e"}
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.868927 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-72x2w" event={"ID":"3b1c2097-4bf9-40c1-bd98-e13daf31f394","Type":"ContainerDied","Data":"a979f5ed67d68bdc65ea7bd22518f29c7809c020d1d730f5dcc7f87b4f294fb6"}
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.869018 4756 scope.go:117] "RemoveContainer" containerID="08f011be455341199b5650618457893600e112d57e46ec8c2bd0fcdb7794ee1e"
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.897826 4756 scope.go:117] "RemoveContainer" containerID="26c8c1e76d52ce240f7875cdf615239269bdcef383fef1d258a6748dcfc32f11"
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.911785 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-72x2w"]
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.920606 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-72x2w"]
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.935334 4756 scope.go:117] "RemoveContainer" containerID="e06f6d690dd8b3ee77d6015bee01aac3f51660ca76b2e3f1021252285bf42ec9"
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.957326 4756 scope.go:117] "RemoveContainer" containerID="08f011be455341199b5650618457893600e112d57e46ec8c2bd0fcdb7794ee1e"
Nov 24 12:40:35 crc kubenswrapper[4756]: E1124 12:40:35.957815 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08f011be455341199b5650618457893600e112d57e46ec8c2bd0fcdb7794ee1e\": container with ID starting with 08f011be455341199b5650618457893600e112d57e46ec8c2bd0fcdb7794ee1e not found: ID does not exist" containerID="08f011be455341199b5650618457893600e112d57e46ec8c2bd0fcdb7794ee1e"
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.957859 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08f011be455341199b5650618457893600e112d57e46ec8c2bd0fcdb7794ee1e"} err="failed to get container status \"08f011be455341199b5650618457893600e112d57e46ec8c2bd0fcdb7794ee1e\": rpc error: code = NotFound desc = could not find container \"08f011be455341199b5650618457893600e112d57e46ec8c2bd0fcdb7794ee1e\": container with ID starting with 08f011be455341199b5650618457893600e112d57e46ec8c2bd0fcdb7794ee1e not found: ID does not exist"
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.957886 4756 scope.go:117] "RemoveContainer" containerID="26c8c1e76d52ce240f7875cdf615239269bdcef383fef1d258a6748dcfc32f11"
Nov 24 12:40:35 crc kubenswrapper[4756]: E1124 12:40:35.958112 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26c8c1e76d52ce240f7875cdf615239269bdcef383fef1d258a6748dcfc32f11\": container with ID starting with 26c8c1e76d52ce240f7875cdf615239269bdcef383fef1d258a6748dcfc32f11 not found: ID does not exist" containerID="26c8c1e76d52ce240f7875cdf615239269bdcef383fef1d258a6748dcfc32f11"
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.958132 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26c8c1e76d52ce240f7875cdf615239269bdcef383fef1d258a6748dcfc32f11"} err="failed to get container status \"26c8c1e76d52ce240f7875cdf615239269bdcef383fef1d258a6748dcfc32f11\": rpc error: code = NotFound desc = could not find container \"26c8c1e76d52ce240f7875cdf615239269bdcef383fef1d258a6748dcfc32f11\": container with ID starting with 26c8c1e76d52ce240f7875cdf615239269bdcef383fef1d258a6748dcfc32f11 not found: ID does not exist"
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.958150 4756 scope.go:117] "RemoveContainer" containerID="e06f6d690dd8b3ee77d6015bee01aac3f51660ca76b2e3f1021252285bf42ec9"
Nov 24 12:40:35 crc kubenswrapper[4756]: E1124 12:40:35.958393 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e06f6d690dd8b3ee77d6015bee01aac3f51660ca76b2e3f1021252285bf42ec9\": container with ID starting with e06f6d690dd8b3ee77d6015bee01aac3f51660ca76b2e3f1021252285bf42ec9 not found: ID does not exist" containerID="e06f6d690dd8b3ee77d6015bee01aac3f51660ca76b2e3f1021252285bf42ec9"
Nov 24 12:40:35 crc kubenswrapper[4756]: I1124 12:40:35.958427 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e06f6d690dd8b3ee77d6015bee01aac3f51660ca76b2e3f1021252285bf42ec9"} err="failed to get container status \"e06f6d690dd8b3ee77d6015bee01aac3f51660ca76b2e3f1021252285bf42ec9\": rpc error: code = NotFound desc = could not find container \"e06f6d690dd8b3ee77d6015bee01aac3f51660ca76b2e3f1021252285bf42ec9\": container with ID starting with e06f6d690dd8b3ee77d6015bee01aac3f51660ca76b2e3f1021252285bf42ec9 not found: ID does not exist"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.138355 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q"]
Nov 24 12:40:36 crc kubenswrapper[4756]: E1124 12:40:36.138585 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1c2097-4bf9-40c1-bd98-e13daf31f394" containerName="extract-utilities"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.138596 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1c2097-4bf9-40c1-bd98-e13daf31f394" containerName="extract-utilities"
Nov 24 12:40:36 crc kubenswrapper[4756]: E1124 12:40:36.138606 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1c2097-4bf9-40c1-bd98-e13daf31f394" containerName="extract-content"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.138612 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1c2097-4bf9-40c1-bd98-e13daf31f394" containerName="extract-content"
Nov 24 12:40:36 crc kubenswrapper[4756]: E1124 12:40:36.138622 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1c2097-4bf9-40c1-bd98-e13daf31f394" containerName="registry-server"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.138628 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1c2097-4bf9-40c1-bd98-e13daf31f394" containerName="registry-server"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.138750 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1c2097-4bf9-40c1-bd98-e13daf31f394" containerName="registry-server"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.139516 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.142529 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-ms8cs"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.155333 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q"]
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.230310 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/688f1e7d-e519-4f20-acae-7b329d42da9b-bundle\") pod \"5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q\" (UID: \"688f1e7d-e519-4f20-acae-7b329d42da9b\") " pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.230678 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/688f1e7d-e519-4f20-acae-7b329d42da9b-util\") pod \"5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q\" (UID: \"688f1e7d-e519-4f20-acae-7b329d42da9b\") " pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.230771 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlf2v\" (UniqueName: \"kubernetes.io/projected/688f1e7d-e519-4f20-acae-7b329d42da9b-kube-api-access-vlf2v\") pod \"5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q\" (UID: \"688f1e7d-e519-4f20-acae-7b329d42da9b\") " pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.332211 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/688f1e7d-e519-4f20-acae-7b329d42da9b-bundle\") pod \"5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q\" (UID: \"688f1e7d-e519-4f20-acae-7b329d42da9b\") " pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.332253 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/688f1e7d-e519-4f20-acae-7b329d42da9b-util\") pod \"5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q\" (UID: \"688f1e7d-e519-4f20-acae-7b329d42da9b\") " pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.332272 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlf2v\" (UniqueName: \"kubernetes.io/projected/688f1e7d-e519-4f20-acae-7b329d42da9b-kube-api-access-vlf2v\") pod \"5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q\" (UID: \"688f1e7d-e519-4f20-acae-7b329d42da9b\") " pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.332688 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/688f1e7d-e519-4f20-acae-7b329d42da9b-bundle\") pod \"5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q\" (UID: \"688f1e7d-e519-4f20-acae-7b329d42da9b\") " pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q"
Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.332832 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName:
\"kubernetes.io/empty-dir/688f1e7d-e519-4f20-acae-7b329d42da9b-util\") pod \"5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q\" (UID: \"688f1e7d-e519-4f20-acae-7b329d42da9b\") " pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q" Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.360742 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlf2v\" (UniqueName: \"kubernetes.io/projected/688f1e7d-e519-4f20-acae-7b329d42da9b-kube-api-access-vlf2v\") pod \"5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q\" (UID: \"688f1e7d-e519-4f20-acae-7b329d42da9b\") " pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q" Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.458180 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q" Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.483583 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b1c2097-4bf9-40c1-bd98-e13daf31f394" path="/var/lib/kubelet/pods/3b1c2097-4bf9-40c1-bd98-e13daf31f394/volumes" Nov 24 12:40:36 crc kubenswrapper[4756]: I1124 12:40:36.882938 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q"] Nov 24 12:40:36 crc kubenswrapper[4756]: W1124 12:40:36.892472 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod688f1e7d_e519_4f20_acae_7b329d42da9b.slice/crio-6a66baf378f8bd1ad072eb49883916e72bc4b127ead8aec739738acf54db068c WatchSource:0}: Error finding container 6a66baf378f8bd1ad072eb49883916e72bc4b127ead8aec739738acf54db068c: Status 404 returned error can't find the container with id 6a66baf378f8bd1ad072eb49883916e72bc4b127ead8aec739738acf54db068c Nov 24 
12:40:37 crc kubenswrapper[4756]: I1124 12:40:37.910919 4756 generic.go:334] "Generic (PLEG): container finished" podID="688f1e7d-e519-4f20-acae-7b329d42da9b" containerID="97747669b67d95adf00e7edc96bcc2e0357aa9fc3906f3c9b5c861fff292682e" exitCode=0 Nov 24 12:40:37 crc kubenswrapper[4756]: I1124 12:40:37.910990 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q" event={"ID":"688f1e7d-e519-4f20-acae-7b329d42da9b","Type":"ContainerDied","Data":"97747669b67d95adf00e7edc96bcc2e0357aa9fc3906f3c9b5c861fff292682e"} Nov 24 12:40:37 crc kubenswrapper[4756]: I1124 12:40:37.911031 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q" event={"ID":"688f1e7d-e519-4f20-acae-7b329d42da9b","Type":"ContainerStarted","Data":"6a66baf378f8bd1ad072eb49883916e72bc4b127ead8aec739738acf54db068c"} Nov 24 12:40:38 crc kubenswrapper[4756]: I1124 12:40:38.918503 4756 generic.go:334] "Generic (PLEG): container finished" podID="688f1e7d-e519-4f20-acae-7b329d42da9b" containerID="14146ab3e4e57a28b5fd7a932dca93fb6d6b3b5ecc918bf2d8664c565d1927f2" exitCode=0 Nov 24 12:40:38 crc kubenswrapper[4756]: I1124 12:40:38.918574 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q" event={"ID":"688f1e7d-e519-4f20-acae-7b329d42da9b","Type":"ContainerDied","Data":"14146ab3e4e57a28b5fd7a932dca93fb6d6b3b5ecc918bf2d8664c565d1927f2"} Nov 24 12:40:39 crc kubenswrapper[4756]: I1124 12:40:39.927492 4756 generic.go:334] "Generic (PLEG): container finished" podID="688f1e7d-e519-4f20-acae-7b329d42da9b" containerID="0e8a82524fda2e114077bc01c320192b82a6359b72f06de85ccbca67af62dfa2" exitCode=0 Nov 24 12:40:39 crc kubenswrapper[4756]: I1124 12:40:39.927553 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q" event={"ID":"688f1e7d-e519-4f20-acae-7b329d42da9b","Type":"ContainerDied","Data":"0e8a82524fda2e114077bc01c320192b82a6359b72f06de85ccbca67af62dfa2"} Nov 24 12:40:41 crc kubenswrapper[4756]: I1124 12:40:41.186849 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q" Nov 24 12:40:41 crc kubenswrapper[4756]: I1124 12:40:41.300593 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/688f1e7d-e519-4f20-acae-7b329d42da9b-bundle\") pod \"688f1e7d-e519-4f20-acae-7b329d42da9b\" (UID: \"688f1e7d-e519-4f20-acae-7b329d42da9b\") " Nov 24 12:40:41 crc kubenswrapper[4756]: I1124 12:40:41.300676 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/688f1e7d-e519-4f20-acae-7b329d42da9b-util\") pod \"688f1e7d-e519-4f20-acae-7b329d42da9b\" (UID: \"688f1e7d-e519-4f20-acae-7b329d42da9b\") " Nov 24 12:40:41 crc kubenswrapper[4756]: I1124 12:40:41.300764 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlf2v\" (UniqueName: \"kubernetes.io/projected/688f1e7d-e519-4f20-acae-7b329d42da9b-kube-api-access-vlf2v\") pod \"688f1e7d-e519-4f20-acae-7b329d42da9b\" (UID: \"688f1e7d-e519-4f20-acae-7b329d42da9b\") " Nov 24 12:40:41 crc kubenswrapper[4756]: I1124 12:40:41.301477 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/688f1e7d-e519-4f20-acae-7b329d42da9b-bundle" (OuterVolumeSpecName: "bundle") pod "688f1e7d-e519-4f20-acae-7b329d42da9b" (UID: "688f1e7d-e519-4f20-acae-7b329d42da9b"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:40:41 crc kubenswrapper[4756]: I1124 12:40:41.309410 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688f1e7d-e519-4f20-acae-7b329d42da9b-kube-api-access-vlf2v" (OuterVolumeSpecName: "kube-api-access-vlf2v") pod "688f1e7d-e519-4f20-acae-7b329d42da9b" (UID: "688f1e7d-e519-4f20-acae-7b329d42da9b"). InnerVolumeSpecName "kube-api-access-vlf2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:40:41 crc kubenswrapper[4756]: I1124 12:40:41.339740 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/688f1e7d-e519-4f20-acae-7b329d42da9b-util" (OuterVolumeSpecName: "util") pod "688f1e7d-e519-4f20-acae-7b329d42da9b" (UID: "688f1e7d-e519-4f20-acae-7b329d42da9b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:40:41 crc kubenswrapper[4756]: I1124 12:40:41.402464 4756 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/688f1e7d-e519-4f20-acae-7b329d42da9b-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:40:41 crc kubenswrapper[4756]: I1124 12:40:41.402503 4756 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/688f1e7d-e519-4f20-acae-7b329d42da9b-util\") on node \"crc\" DevicePath \"\"" Nov 24 12:40:41 crc kubenswrapper[4756]: I1124 12:40:41.402515 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlf2v\" (UniqueName: \"kubernetes.io/projected/688f1e7d-e519-4f20-acae-7b329d42da9b-kube-api-access-vlf2v\") on node \"crc\" DevicePath \"\"" Nov 24 12:40:41 crc kubenswrapper[4756]: I1124 12:40:41.953823 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q" 
event={"ID":"688f1e7d-e519-4f20-acae-7b329d42da9b","Type":"ContainerDied","Data":"6a66baf378f8bd1ad072eb49883916e72bc4b127ead8aec739738acf54db068c"} Nov 24 12:40:41 crc kubenswrapper[4756]: I1124 12:40:41.953883 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a66baf378f8bd1ad072eb49883916e72bc4b127ead8aec739738acf54db068c" Nov 24 12:40:41 crc kubenswrapper[4756]: I1124 12:40:41.953969 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.312221 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jn6jv"] Nov 24 12:40:44 crc kubenswrapper[4756]: E1124 12:40:44.312919 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688f1e7d-e519-4f20-acae-7b329d42da9b" containerName="extract" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.312939 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="688f1e7d-e519-4f20-acae-7b329d42da9b" containerName="extract" Nov 24 12:40:44 crc kubenswrapper[4756]: E1124 12:40:44.312962 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688f1e7d-e519-4f20-acae-7b329d42da9b" containerName="pull" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.312975 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="688f1e7d-e519-4f20-acae-7b329d42da9b" containerName="pull" Nov 24 12:40:44 crc kubenswrapper[4756]: E1124 12:40:44.313014 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688f1e7d-e519-4f20-acae-7b329d42da9b" containerName="util" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.313026 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="688f1e7d-e519-4f20-acae-7b329d42da9b" containerName="util" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.313259 4756 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="688f1e7d-e519-4f20-acae-7b329d42da9b" containerName="extract" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.314906 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.322227 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jn6jv"] Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.446808 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b5579a-3484-446d-84be-2eb829811967-catalog-content\") pod \"redhat-operators-jn6jv\" (UID: \"c7b5579a-3484-446d-84be-2eb829811967\") " pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.446880 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b5579a-3484-446d-84be-2eb829811967-utilities\") pod \"redhat-operators-jn6jv\" (UID: \"c7b5579a-3484-446d-84be-2eb829811967\") " pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.446942 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpd7n\" (UniqueName: \"kubernetes.io/projected/c7b5579a-3484-446d-84be-2eb829811967-kube-api-access-dpd7n\") pod \"redhat-operators-jn6jv\" (UID: \"c7b5579a-3484-446d-84be-2eb829811967\") " pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.548172 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b5579a-3484-446d-84be-2eb829811967-catalog-content\") pod \"redhat-operators-jn6jv\" (UID: 
\"c7b5579a-3484-446d-84be-2eb829811967\") " pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.548245 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b5579a-3484-446d-84be-2eb829811967-utilities\") pod \"redhat-operators-jn6jv\" (UID: \"c7b5579a-3484-446d-84be-2eb829811967\") " pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.548282 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpd7n\" (UniqueName: \"kubernetes.io/projected/c7b5579a-3484-446d-84be-2eb829811967-kube-api-access-dpd7n\") pod \"redhat-operators-jn6jv\" (UID: \"c7b5579a-3484-446d-84be-2eb829811967\") " pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.548619 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b5579a-3484-446d-84be-2eb829811967-catalog-content\") pod \"redhat-operators-jn6jv\" (UID: \"c7b5579a-3484-446d-84be-2eb829811967\") " pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.548727 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b5579a-3484-446d-84be-2eb829811967-utilities\") pod \"redhat-operators-jn6jv\" (UID: \"c7b5579a-3484-446d-84be-2eb829811967\") " pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.570086 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpd7n\" (UniqueName: \"kubernetes.io/projected/c7b5579a-3484-446d-84be-2eb829811967-kube-api-access-dpd7n\") pod \"redhat-operators-jn6jv\" (UID: \"c7b5579a-3484-446d-84be-2eb829811967\") " 
pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:44 crc kubenswrapper[4756]: I1124 12:40:44.631759 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:45 crc kubenswrapper[4756]: I1124 12:40:45.132425 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jn6jv"] Nov 24 12:40:45 crc kubenswrapper[4756]: I1124 12:40:45.981002 4756 generic.go:334] "Generic (PLEG): container finished" podID="c7b5579a-3484-446d-84be-2eb829811967" containerID="3fdd00d481e9ea7d0727617a7e954fa5f6f543dd3a0d37fefe699475fd748a18" exitCode=0 Nov 24 12:40:45 crc kubenswrapper[4756]: I1124 12:40:45.981052 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn6jv" event={"ID":"c7b5579a-3484-446d-84be-2eb829811967","Type":"ContainerDied","Data":"3fdd00d481e9ea7d0727617a7e954fa5f6f543dd3a0d37fefe699475fd748a18"} Nov 24 12:40:45 crc kubenswrapper[4756]: I1124 12:40:45.981389 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn6jv" event={"ID":"c7b5579a-3484-446d-84be-2eb829811967","Type":"ContainerStarted","Data":"3080f8f65acc84d21ca7bf7e52ff81193bfb13fddb153a91e4fdcbaf02f07c61"} Nov 24 12:40:47 crc kubenswrapper[4756]: I1124 12:40:47.416351 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-666974d685-8lth8"] Nov 24 12:40:47 crc kubenswrapper[4756]: I1124 12:40:47.417578 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-666974d685-8lth8" Nov 24 12:40:47 crc kubenswrapper[4756]: I1124 12:40:47.423016 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-q6d4m" Nov 24 12:40:47 crc kubenswrapper[4756]: I1124 12:40:47.442985 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-666974d685-8lth8"] Nov 24 12:40:47 crc kubenswrapper[4756]: I1124 12:40:47.496036 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-894gb\" (UniqueName: \"kubernetes.io/projected/0e481139-e850-4597-b98b-0a2aa8b1add9-kube-api-access-894gb\") pod \"openstack-operator-controller-operator-666974d685-8lth8\" (UID: \"0e481139-e850-4597-b98b-0a2aa8b1add9\") " pod="openstack-operators/openstack-operator-controller-operator-666974d685-8lth8" Nov 24 12:40:47 crc kubenswrapper[4756]: I1124 12:40:47.597967 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-894gb\" (UniqueName: \"kubernetes.io/projected/0e481139-e850-4597-b98b-0a2aa8b1add9-kube-api-access-894gb\") pod \"openstack-operator-controller-operator-666974d685-8lth8\" (UID: \"0e481139-e850-4597-b98b-0a2aa8b1add9\") " pod="openstack-operators/openstack-operator-controller-operator-666974d685-8lth8" Nov 24 12:40:47 crc kubenswrapper[4756]: I1124 12:40:47.618712 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-894gb\" (UniqueName: \"kubernetes.io/projected/0e481139-e850-4597-b98b-0a2aa8b1add9-kube-api-access-894gb\") pod \"openstack-operator-controller-operator-666974d685-8lth8\" (UID: \"0e481139-e850-4597-b98b-0a2aa8b1add9\") " pod="openstack-operators/openstack-operator-controller-operator-666974d685-8lth8" Nov 24 12:40:47 crc kubenswrapper[4756]: I1124 12:40:47.736368 4756 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-666974d685-8lth8" Nov 24 12:40:48 crc kubenswrapper[4756]: I1124 12:40:48.046215 4756 generic.go:334] "Generic (PLEG): container finished" podID="c7b5579a-3484-446d-84be-2eb829811967" containerID="7fd036b3591765b2035ccd54797dd917ab3221ab5667330524e8bbb8c739fdb0" exitCode=0 Nov 24 12:40:48 crc kubenswrapper[4756]: I1124 12:40:48.046368 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn6jv" event={"ID":"c7b5579a-3484-446d-84be-2eb829811967","Type":"ContainerDied","Data":"7fd036b3591765b2035ccd54797dd917ab3221ab5667330524e8bbb8c739fdb0"} Nov 24 12:40:48 crc kubenswrapper[4756]: I1124 12:40:48.159516 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-666974d685-8lth8"] Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.055951 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-666974d685-8lth8" event={"ID":"0e481139-e850-4597-b98b-0a2aa8b1add9","Type":"ContainerStarted","Data":"a8686473e3b8353c404ac8b60b82905437c0da2a6bb1ca4015149b430c8c1be1"} Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.060042 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn6jv" event={"ID":"c7b5579a-3484-446d-84be-2eb829811967","Type":"ContainerStarted","Data":"485793ec046b6776f96cbc23398e8283a3de1df826aafeadb8b93897484e9595"} Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.077808 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jn6jv" podStartSLOduration=2.515942741 podStartE2EDuration="5.077790863s" podCreationTimestamp="2025-11-24 12:40:44 +0000 UTC" firstStartedPulling="2025-11-24 12:40:45.982559548 +0000 UTC m=+778.340073690" 
lastFinishedPulling="2025-11-24 12:40:48.54440768 +0000 UTC m=+780.901921812" observedRunningTime="2025-11-24 12:40:49.075972688 +0000 UTC m=+781.433486840" watchObservedRunningTime="2025-11-24 12:40:49.077790863 +0000 UTC m=+781.435305005" Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.515995 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nbqfv"] Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.518593 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.520561 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbqfv"] Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.631757 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf4f082-e5cb-4eda-920f-600a30ab2eed-catalog-content\") pod \"redhat-marketplace-nbqfv\" (UID: \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\") " pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.631822 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf4f082-e5cb-4eda-920f-600a30ab2eed-utilities\") pod \"redhat-marketplace-nbqfv\" (UID: \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\") " pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.631863 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl9dk\" (UniqueName: \"kubernetes.io/projected/eaf4f082-e5cb-4eda-920f-600a30ab2eed-kube-api-access-zl9dk\") pod \"redhat-marketplace-nbqfv\" (UID: \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\") " 
pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.733292 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf4f082-e5cb-4eda-920f-600a30ab2eed-catalog-content\") pod \"redhat-marketplace-nbqfv\" (UID: \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\") " pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.733360 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf4f082-e5cb-4eda-920f-600a30ab2eed-utilities\") pod \"redhat-marketplace-nbqfv\" (UID: \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\") " pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.733397 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl9dk\" (UniqueName: \"kubernetes.io/projected/eaf4f082-e5cb-4eda-920f-600a30ab2eed-kube-api-access-zl9dk\") pod \"redhat-marketplace-nbqfv\" (UID: \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\") " pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.734281 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf4f082-e5cb-4eda-920f-600a30ab2eed-utilities\") pod \"redhat-marketplace-nbqfv\" (UID: \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\") " pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.734348 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf4f082-e5cb-4eda-920f-600a30ab2eed-catalog-content\") pod \"redhat-marketplace-nbqfv\" (UID: \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\") " pod="openshift-marketplace/redhat-marketplace-nbqfv" 
Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.752392 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl9dk\" (UniqueName: \"kubernetes.io/projected/eaf4f082-e5cb-4eda-920f-600a30ab2eed-kube-api-access-zl9dk\") pod \"redhat-marketplace-nbqfv\" (UID: \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\") " pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:40:49 crc kubenswrapper[4756]: I1124 12:40:49.835478 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:40:52 crc kubenswrapper[4756]: I1124 12:40:52.681722 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbqfv"] Nov 24 12:40:53 crc kubenswrapper[4756]: I1124 12:40:53.105554 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-666974d685-8lth8" event={"ID":"0e481139-e850-4597-b98b-0a2aa8b1add9","Type":"ContainerStarted","Data":"b88e48d449ca011c66387c9ee2b1f78347d9f2937dadab13dfc7587a25043f9d"} Nov 24 12:40:53 crc kubenswrapper[4756]: I1124 12:40:53.105790 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-666974d685-8lth8" Nov 24 12:40:53 crc kubenswrapper[4756]: I1124 12:40:53.110829 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbqfv" event={"ID":"eaf4f082-e5cb-4eda-920f-600a30ab2eed","Type":"ContainerStarted","Data":"30c716eb29d32a948135081deaef168597d7444fd17b041c1edbf051b00a7d4b"} Nov 24 12:40:53 crc kubenswrapper[4756]: I1124 12:40:53.111134 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbqfv" event={"ID":"eaf4f082-e5cb-4eda-920f-600a30ab2eed","Type":"ContainerStarted","Data":"67ce27ff90376b550bdd0fa96fdb5f0b8f81ea0737bfe2fe407ca869280f757f"} Nov 24 12:40:53 crc 
kubenswrapper[4756]: I1124 12:40:53.139944 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-666974d685-8lth8" podStartSLOduration=2.000862804 podStartE2EDuration="6.139923568s" podCreationTimestamp="2025-11-24 12:40:47 +0000 UTC" firstStartedPulling="2025-11-24 12:40:48.165531389 +0000 UTC m=+780.523045531" lastFinishedPulling="2025-11-24 12:40:52.304592153 +0000 UTC m=+784.662106295" observedRunningTime="2025-11-24 12:40:53.136298118 +0000 UTC m=+785.493812280" watchObservedRunningTime="2025-11-24 12:40:53.139923568 +0000 UTC m=+785.497437710" Nov 24 12:40:54 crc kubenswrapper[4756]: I1124 12:40:54.118057 4756 generic.go:334] "Generic (PLEG): container finished" podID="eaf4f082-e5cb-4eda-920f-600a30ab2eed" containerID="30c716eb29d32a948135081deaef168597d7444fd17b041c1edbf051b00a7d4b" exitCode=0 Nov 24 12:40:54 crc kubenswrapper[4756]: I1124 12:40:54.118417 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbqfv" event={"ID":"eaf4f082-e5cb-4eda-920f-600a30ab2eed","Type":"ContainerDied","Data":"30c716eb29d32a948135081deaef168597d7444fd17b041c1edbf051b00a7d4b"} Nov 24 12:40:54 crc kubenswrapper[4756]: I1124 12:40:54.631949 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:54 crc kubenswrapper[4756]: I1124 12:40:54.632002 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:54 crc kubenswrapper[4756]: I1124 12:40:54.686551 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:55 crc kubenswrapper[4756]: I1124 12:40:55.128342 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbqfv" 
event={"ID":"eaf4f082-e5cb-4eda-920f-600a30ab2eed","Type":"ContainerStarted","Data":"7274f5d355c72ff5e245d28f2b0a96f26700e4c8057ca3458bff6bf0dc3d7cf3"} Nov 24 12:40:55 crc kubenswrapper[4756]: I1124 12:40:55.172294 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:56 crc kubenswrapper[4756]: I1124 12:40:56.139794 4756 generic.go:334] "Generic (PLEG): container finished" podID="eaf4f082-e5cb-4eda-920f-600a30ab2eed" containerID="7274f5d355c72ff5e245d28f2b0a96f26700e4c8057ca3458bff6bf0dc3d7cf3" exitCode=0 Nov 24 12:40:56 crc kubenswrapper[4756]: I1124 12:40:56.139886 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbqfv" event={"ID":"eaf4f082-e5cb-4eda-920f-600a30ab2eed","Type":"ContainerDied","Data":"7274f5d355c72ff5e245d28f2b0a96f26700e4c8057ca3458bff6bf0dc3d7cf3"} Nov 24 12:40:57 crc kubenswrapper[4756]: I1124 12:40:57.148104 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbqfv" event={"ID":"eaf4f082-e5cb-4eda-920f-600a30ab2eed","Type":"ContainerStarted","Data":"c62bc600d63068828c293bec5d5d54a81d4b2a9f825b2b8a6ee1282520458b2c"} Nov 24 12:40:57 crc kubenswrapper[4756]: I1124 12:40:57.171143 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nbqfv" podStartSLOduration=5.715426719 podStartE2EDuration="8.171127936s" podCreationTimestamp="2025-11-24 12:40:49 +0000 UTC" firstStartedPulling="2025-11-24 12:40:54.120138709 +0000 UTC m=+786.477652841" lastFinishedPulling="2025-11-24 12:40:56.575839886 +0000 UTC m=+788.933354058" observedRunningTime="2025-11-24 12:40:57.166593613 +0000 UTC m=+789.524107755" watchObservedRunningTime="2025-11-24 12:40:57.171127936 +0000 UTC m=+789.528642078" Nov 24 12:40:57 crc kubenswrapper[4756]: I1124 12:40:57.739340 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-operator-666974d685-8lth8" Nov 24 12:40:58 crc kubenswrapper[4756]: I1124 12:40:58.304234 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jn6jv"] Nov 24 12:40:58 crc kubenswrapper[4756]: I1124 12:40:58.304574 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jn6jv" podUID="c7b5579a-3484-446d-84be-2eb829811967" containerName="registry-server" containerID="cri-o://485793ec046b6776f96cbc23398e8283a3de1df826aafeadb8b93897484e9595" gracePeriod=2 Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.171773 4756 generic.go:334] "Generic (PLEG): container finished" podID="c7b5579a-3484-446d-84be-2eb829811967" containerID="485793ec046b6776f96cbc23398e8283a3de1df826aafeadb8b93897484e9595" exitCode=0 Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.171877 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn6jv" event={"ID":"c7b5579a-3484-446d-84be-2eb829811967","Type":"ContainerDied","Data":"485793ec046b6776f96cbc23398e8283a3de1df826aafeadb8b93897484e9595"} Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.172110 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn6jv" event={"ID":"c7b5579a-3484-446d-84be-2eb829811967","Type":"ContainerDied","Data":"3080f8f65acc84d21ca7bf7e52ff81193bfb13fddb153a91e4fdcbaf02f07c61"} Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.172124 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3080f8f65acc84d21ca7bf7e52ff81193bfb13fddb153a91e4fdcbaf02f07c61" Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.191114 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.290953 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpd7n\" (UniqueName: \"kubernetes.io/projected/c7b5579a-3484-446d-84be-2eb829811967-kube-api-access-dpd7n\") pod \"c7b5579a-3484-446d-84be-2eb829811967\" (UID: \"c7b5579a-3484-446d-84be-2eb829811967\") " Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.291070 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b5579a-3484-446d-84be-2eb829811967-catalog-content\") pod \"c7b5579a-3484-446d-84be-2eb829811967\" (UID: \"c7b5579a-3484-446d-84be-2eb829811967\") " Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.291211 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b5579a-3484-446d-84be-2eb829811967-utilities\") pod \"c7b5579a-3484-446d-84be-2eb829811967\" (UID: \"c7b5579a-3484-446d-84be-2eb829811967\") " Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.292124 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b5579a-3484-446d-84be-2eb829811967-utilities" (OuterVolumeSpecName: "utilities") pod "c7b5579a-3484-446d-84be-2eb829811967" (UID: "c7b5579a-3484-446d-84be-2eb829811967"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.299997 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b5579a-3484-446d-84be-2eb829811967-kube-api-access-dpd7n" (OuterVolumeSpecName: "kube-api-access-dpd7n") pod "c7b5579a-3484-446d-84be-2eb829811967" (UID: "c7b5579a-3484-446d-84be-2eb829811967"). InnerVolumeSpecName "kube-api-access-dpd7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.390098 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b5579a-3484-446d-84be-2eb829811967-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7b5579a-3484-446d-84be-2eb829811967" (UID: "c7b5579a-3484-446d-84be-2eb829811967"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.393298 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b5579a-3484-446d-84be-2eb829811967-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.393331 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpd7n\" (UniqueName: \"kubernetes.io/projected/c7b5579a-3484-446d-84be-2eb829811967-kube-api-access-dpd7n\") on node \"crc\" DevicePath \"\"" Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.393343 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b5579a-3484-446d-84be-2eb829811967-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.836030 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.836364 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:40:59 crc kubenswrapper[4756]: I1124 12:40:59.880471 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:41:00 crc kubenswrapper[4756]: I1124 12:41:00.178572 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jn6jv" Nov 24 12:41:00 crc kubenswrapper[4756]: I1124 12:41:00.208880 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jn6jv"] Nov 24 12:41:00 crc kubenswrapper[4756]: I1124 12:41:00.215064 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jn6jv"] Nov 24 12:41:00 crc kubenswrapper[4756]: I1124 12:41:00.484114 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b5579a-3484-446d-84be-2eb829811967" path="/var/lib/kubelet/pods/c7b5579a-3484-446d-84be-2eb829811967/volumes" Nov 24 12:41:03 crc kubenswrapper[4756]: I1124 12:41:03.479589 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:41:03 crc kubenswrapper[4756]: I1124 12:41:03.479877 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:41:03 crc kubenswrapper[4756]: I1124 12:41:03.479915 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:41:03 crc kubenswrapper[4756]: I1124 12:41:03.480294 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e7ca259f45f1dc780d3934be64aebd04b7e861b4656bbd4bf229c3c5aaf5bbb"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:41:03 crc kubenswrapper[4756]: I1124 12:41:03.480347 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://0e7ca259f45f1dc780d3934be64aebd04b7e861b4656bbd4bf229c3c5aaf5bbb" gracePeriod=600 Nov 24 12:41:04 crc kubenswrapper[4756]: I1124 12:41:04.204986 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="0e7ca259f45f1dc780d3934be64aebd04b7e861b4656bbd4bf229c3c5aaf5bbb" exitCode=0 Nov 24 12:41:04 crc kubenswrapper[4756]: I1124 12:41:04.205043 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"0e7ca259f45f1dc780d3934be64aebd04b7e861b4656bbd4bf229c3c5aaf5bbb"} Nov 24 12:41:04 crc kubenswrapper[4756]: I1124 12:41:04.205084 4756 scope.go:117] "RemoveContainer" containerID="08cbdaf4c5a00dfec5d8d1553322ef80891c64bf60f6dc1ea376e947fc205e7b" Nov 24 12:41:05 crc kubenswrapper[4756]: I1124 12:41:05.213638 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"07694ef974a30730903412aa5f6b85b0f7a0adf88d6a936a30f315e540f03ca9"} Nov 24 12:41:09 crc kubenswrapper[4756]: I1124 12:41:09.882532 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:41:12 crc kubenswrapper[4756]: I1124 12:41:12.314123 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbqfv"] Nov 24 12:41:12 crc kubenswrapper[4756]: I1124 12:41:12.315109 4756 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nbqfv" podUID="eaf4f082-e5cb-4eda-920f-600a30ab2eed" containerName="registry-server" containerID="cri-o://c62bc600d63068828c293bec5d5d54a81d4b2a9f825b2b8a6ee1282520458b2c" gracePeriod=2 Nov 24 12:41:12 crc kubenswrapper[4756]: I1124 12:41:12.718792 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:41:12 crc kubenswrapper[4756]: I1124 12:41:12.820637 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf4f082-e5cb-4eda-920f-600a30ab2eed-catalog-content\") pod \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\" (UID: \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\") " Nov 24 12:41:12 crc kubenswrapper[4756]: I1124 12:41:12.820755 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl9dk\" (UniqueName: \"kubernetes.io/projected/eaf4f082-e5cb-4eda-920f-600a30ab2eed-kube-api-access-zl9dk\") pod \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\" (UID: \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\") " Nov 24 12:41:12 crc kubenswrapper[4756]: I1124 12:41:12.820802 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf4f082-e5cb-4eda-920f-600a30ab2eed-utilities\") pod \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\" (UID: \"eaf4f082-e5cb-4eda-920f-600a30ab2eed\") " Nov 24 12:41:12 crc kubenswrapper[4756]: I1124 12:41:12.822184 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaf4f082-e5cb-4eda-920f-600a30ab2eed-utilities" (OuterVolumeSpecName: "utilities") pod "eaf4f082-e5cb-4eda-920f-600a30ab2eed" (UID: "eaf4f082-e5cb-4eda-920f-600a30ab2eed"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:41:12 crc kubenswrapper[4756]: I1124 12:41:12.829369 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf4f082-e5cb-4eda-920f-600a30ab2eed-kube-api-access-zl9dk" (OuterVolumeSpecName: "kube-api-access-zl9dk") pod "eaf4f082-e5cb-4eda-920f-600a30ab2eed" (UID: "eaf4f082-e5cb-4eda-920f-600a30ab2eed"). InnerVolumeSpecName "kube-api-access-zl9dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:41:12 crc kubenswrapper[4756]: I1124 12:41:12.838824 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaf4f082-e5cb-4eda-920f-600a30ab2eed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eaf4f082-e5cb-4eda-920f-600a30ab2eed" (UID: "eaf4f082-e5cb-4eda-920f-600a30ab2eed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:41:12 crc kubenswrapper[4756]: I1124 12:41:12.922631 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaf4f082-e5cb-4eda-920f-600a30ab2eed-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:41:12 crc kubenswrapper[4756]: I1124 12:41:12.922858 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl9dk\" (UniqueName: \"kubernetes.io/projected/eaf4f082-e5cb-4eda-920f-600a30ab2eed-kube-api-access-zl9dk\") on node \"crc\" DevicePath \"\"" Nov 24 12:41:12 crc kubenswrapper[4756]: I1124 12:41:12.922869 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaf4f082-e5cb-4eda-920f-600a30ab2eed-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.306369 4756 generic.go:334] "Generic (PLEG): container finished" podID="eaf4f082-e5cb-4eda-920f-600a30ab2eed" 
containerID="c62bc600d63068828c293bec5d5d54a81d4b2a9f825b2b8a6ee1282520458b2c" exitCode=0 Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.306554 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbqfv" event={"ID":"eaf4f082-e5cb-4eda-920f-600a30ab2eed","Type":"ContainerDied","Data":"c62bc600d63068828c293bec5d5d54a81d4b2a9f825b2b8a6ee1282520458b2c"} Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.306920 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbqfv" event={"ID":"eaf4f082-e5cb-4eda-920f-600a30ab2eed","Type":"ContainerDied","Data":"67ce27ff90376b550bdd0fa96fdb5f0b8f81ea0737bfe2fe407ca869280f757f"} Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.306952 4756 scope.go:117] "RemoveContainer" containerID="c62bc600d63068828c293bec5d5d54a81d4b2a9f825b2b8a6ee1282520458b2c" Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.306638 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbqfv" Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.330917 4756 scope.go:117] "RemoveContainer" containerID="7274f5d355c72ff5e245d28f2b0a96f26700e4c8057ca3458bff6bf0dc3d7cf3" Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.357064 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbqfv"] Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.363056 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbqfv"] Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.381606 4756 scope.go:117] "RemoveContainer" containerID="30c716eb29d32a948135081deaef168597d7444fd17b041c1edbf051b00a7d4b" Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.401097 4756 scope.go:117] "RemoveContainer" containerID="c62bc600d63068828c293bec5d5d54a81d4b2a9f825b2b8a6ee1282520458b2c" Nov 24 12:41:13 crc kubenswrapper[4756]: E1124 12:41:13.401771 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62bc600d63068828c293bec5d5d54a81d4b2a9f825b2b8a6ee1282520458b2c\": container with ID starting with c62bc600d63068828c293bec5d5d54a81d4b2a9f825b2b8a6ee1282520458b2c not found: ID does not exist" containerID="c62bc600d63068828c293bec5d5d54a81d4b2a9f825b2b8a6ee1282520458b2c" Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.401872 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c62bc600d63068828c293bec5d5d54a81d4b2a9f825b2b8a6ee1282520458b2c"} err="failed to get container status \"c62bc600d63068828c293bec5d5d54a81d4b2a9f825b2b8a6ee1282520458b2c\": rpc error: code = NotFound desc = could not find container \"c62bc600d63068828c293bec5d5d54a81d4b2a9f825b2b8a6ee1282520458b2c\": container with ID starting with c62bc600d63068828c293bec5d5d54a81d4b2a9f825b2b8a6ee1282520458b2c not found: 
ID does not exist" Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.401897 4756 scope.go:117] "RemoveContainer" containerID="7274f5d355c72ff5e245d28f2b0a96f26700e4c8057ca3458bff6bf0dc3d7cf3" Nov 24 12:41:13 crc kubenswrapper[4756]: E1124 12:41:13.402454 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7274f5d355c72ff5e245d28f2b0a96f26700e4c8057ca3458bff6bf0dc3d7cf3\": container with ID starting with 7274f5d355c72ff5e245d28f2b0a96f26700e4c8057ca3458bff6bf0dc3d7cf3 not found: ID does not exist" containerID="7274f5d355c72ff5e245d28f2b0a96f26700e4c8057ca3458bff6bf0dc3d7cf3" Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.402511 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7274f5d355c72ff5e245d28f2b0a96f26700e4c8057ca3458bff6bf0dc3d7cf3"} err="failed to get container status \"7274f5d355c72ff5e245d28f2b0a96f26700e4c8057ca3458bff6bf0dc3d7cf3\": rpc error: code = NotFound desc = could not find container \"7274f5d355c72ff5e245d28f2b0a96f26700e4c8057ca3458bff6bf0dc3d7cf3\": container with ID starting with 7274f5d355c72ff5e245d28f2b0a96f26700e4c8057ca3458bff6bf0dc3d7cf3 not found: ID does not exist" Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.402548 4756 scope.go:117] "RemoveContainer" containerID="30c716eb29d32a948135081deaef168597d7444fd17b041c1edbf051b00a7d4b" Nov 24 12:41:13 crc kubenswrapper[4756]: E1124 12:41:13.402984 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30c716eb29d32a948135081deaef168597d7444fd17b041c1edbf051b00a7d4b\": container with ID starting with 30c716eb29d32a948135081deaef168597d7444fd17b041c1edbf051b00a7d4b not found: ID does not exist" containerID="30c716eb29d32a948135081deaef168597d7444fd17b041c1edbf051b00a7d4b" Nov 24 12:41:13 crc kubenswrapper[4756]: I1124 12:41:13.403018 4756 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c716eb29d32a948135081deaef168597d7444fd17b041c1edbf051b00a7d4b"} err="failed to get container status \"30c716eb29d32a948135081deaef168597d7444fd17b041c1edbf051b00a7d4b\": rpc error: code = NotFound desc = could not find container \"30c716eb29d32a948135081deaef168597d7444fd17b041c1edbf051b00a7d4b\": container with ID starting with 30c716eb29d32a948135081deaef168597d7444fd17b041c1edbf051b00a7d4b not found: ID does not exist" Nov 24 12:41:14 crc kubenswrapper[4756]: I1124 12:41:14.485135 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaf4f082-e5cb-4eda-920f-600a30ab2eed" path="/var/lib/kubelet/pods/eaf4f082-e5cb-4eda-920f-600a30ab2eed/volumes" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.778892 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx"] Nov 24 12:41:30 crc kubenswrapper[4756]: E1124 12:41:30.780275 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b5579a-3484-446d-84be-2eb829811967" containerName="extract-content" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.780298 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b5579a-3484-446d-84be-2eb829811967" containerName="extract-content" Nov 24 12:41:30 crc kubenswrapper[4756]: E1124 12:41:30.780319 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b5579a-3484-446d-84be-2eb829811967" containerName="extract-utilities" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.780326 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b5579a-3484-446d-84be-2eb829811967" containerName="extract-utilities" Nov 24 12:41:30 crc kubenswrapper[4756]: E1124 12:41:30.780336 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf4f082-e5cb-4eda-920f-600a30ab2eed" containerName="extract-utilities" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 
12:41:30.780344 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf4f082-e5cb-4eda-920f-600a30ab2eed" containerName="extract-utilities" Nov 24 12:41:30 crc kubenswrapper[4756]: E1124 12:41:30.780351 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf4f082-e5cb-4eda-920f-600a30ab2eed" containerName="extract-content" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.780361 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf4f082-e5cb-4eda-920f-600a30ab2eed" containerName="extract-content" Nov 24 12:41:30 crc kubenswrapper[4756]: E1124 12:41:30.780378 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf4f082-e5cb-4eda-920f-600a30ab2eed" containerName="registry-server" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.780385 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf4f082-e5cb-4eda-920f-600a30ab2eed" containerName="registry-server" Nov 24 12:41:30 crc kubenswrapper[4756]: E1124 12:41:30.780397 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b5579a-3484-446d-84be-2eb829811967" containerName="registry-server" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.780404 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b5579a-3484-446d-84be-2eb829811967" containerName="registry-server" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.780590 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf4f082-e5cb-4eda-920f-600a30ab2eed" containerName="registry-server" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.780613 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b5579a-3484-446d-84be-2eb829811967" containerName="registry-server" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.781722 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.784106 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mk9v4" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.787820 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx"] Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.789486 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.791037 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-dmw74" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.792928 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx"] Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.809966 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx"] Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.831553 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4"] Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.832786 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.837516 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-9vq7q" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.845014 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4"] Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.868855 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh6qv\" (UniqueName: \"kubernetes.io/projected/0aa7a2bc-482f-4ed4-820d-331ea6d971c7-kube-api-access-qh6qv\") pod \"cinder-operator-controller-manager-79856dc55c-5srrx\" (UID: \"0aa7a2bc-482f-4ed4-820d-331ea6d971c7\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.868945 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hhf\" (UniqueName: \"kubernetes.io/projected/81563dca-2369-4349-9881-b2031df19de0-kube-api-access-54hhf\") pod \"barbican-operator-controller-manager-86dc4d89c8-s8jfx\" (UID: \"81563dca-2369-4349-9881-b2031df19de0\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.869060 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4"] Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.870246 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.874029 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-mffrh" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.879621 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm"] Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.880844 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.883012 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-qpnkk" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.909375 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4"] Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.912880 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm"] Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.920829 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c"] Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.921912 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.927304 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-69qq9" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.939632 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w"] Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.940628 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.945238 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.945261 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dsz5b" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.968497 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c"] Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.970411 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54hhf\" (UniqueName: \"kubernetes.io/projected/81563dca-2369-4349-9881-b2031df19de0-kube-api-access-54hhf\") pod \"barbican-operator-controller-manager-86dc4d89c8-s8jfx\" (UID: \"81563dca-2369-4349-9881-b2031df19de0\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.970465 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s59s\" (UniqueName: 
\"kubernetes.io/projected/e2224700-f8c7-4380-95c5-537e168c7e99-kube-api-access-8s59s\") pod \"heat-operator-controller-manager-774b86978c-p9zgm\" (UID: \"e2224700-f8c7-4380-95c5-537e168c7e99\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.970547 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh6qv\" (UniqueName: \"kubernetes.io/projected/0aa7a2bc-482f-4ed4-820d-331ea6d971c7-kube-api-access-qh6qv\") pod \"cinder-operator-controller-manager-79856dc55c-5srrx\" (UID: \"0aa7a2bc-482f-4ed4-820d-331ea6d971c7\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.970578 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrrsp\" (UniqueName: \"kubernetes.io/projected/3bc7fab7-280b-4964-a1f0-51f0b59438ed-kube-api-access-qrrsp\") pod \"designate-operator-controller-manager-7d695c9b56-djhp4\" (UID: \"3bc7fab7-280b-4964-a1f0-51f0b59438ed\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.970600 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp8ss\" (UniqueName: \"kubernetes.io/projected/91ae544e-de6e-44e8-9119-eae33586fe56-kube-api-access-rp8ss\") pod \"glance-operator-controller-manager-68b95954c9-w22c4\" (UID: \"91ae544e-de6e-44e8-9119-eae33586fe56\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.987243 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5"] Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.988670 4756 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.991552 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4v5zf" Nov 24 12:41:30 crc kubenswrapper[4756]: I1124 12:41:30.994183 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.012232 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.013959 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.018578 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zthmj" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.018791 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.019037 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh6qv\" (UniqueName: \"kubernetes.io/projected/0aa7a2bc-482f-4ed4-820d-331ea6d971c7-kube-api-access-qh6qv\") pod \"cinder-operator-controller-manager-79856dc55c-5srrx\" (UID: \"0aa7a2bc-482f-4ed4-820d-331ea6d971c7\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.026381 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5"] Nov 24 12:41:31 crc 
kubenswrapper[4756]: I1124 12:41:31.030641 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54hhf\" (UniqueName: \"kubernetes.io/projected/81563dca-2369-4349-9881-b2031df19de0-kube-api-access-54hhf\") pod \"barbican-operator-controller-manager-86dc4d89c8-s8jfx\" (UID: \"81563dca-2369-4349-9881-b2031df19de0\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.043301 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.044679 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.051723 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-g2rwb" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.056410 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.063466 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.066507 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.068266 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-q6456" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.072507 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrrsp\" (UniqueName: \"kubernetes.io/projected/3bc7fab7-280b-4964-a1f0-51f0b59438ed-kube-api-access-qrrsp\") pod \"designate-operator-controller-manager-7d695c9b56-djhp4\" (UID: \"3bc7fab7-280b-4964-a1f0-51f0b59438ed\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.072542 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp8ss\" (UniqueName: \"kubernetes.io/projected/91ae544e-de6e-44e8-9119-eae33586fe56-kube-api-access-rp8ss\") pod \"glance-operator-controller-manager-68b95954c9-w22c4\" (UID: \"91ae544e-de6e-44e8-9119-eae33586fe56\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.072590 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d4269ad-a2ff-47be-bade-792bbf616cf2-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-hbs6w\" (UID: \"8d4269ad-a2ff-47be-bade-792bbf616cf2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.072624 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s59s\" (UniqueName: \"kubernetes.io/projected/e2224700-f8c7-4380-95c5-537e168c7e99-kube-api-access-8s59s\") pod \"heat-operator-controller-manager-774b86978c-p9zgm\" 
(UID: \"e2224700-f8c7-4380-95c5-537e168c7e99\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.072642 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbqb5\" (UniqueName: \"kubernetes.io/projected/4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e-kube-api-access-lbqb5\") pod \"ironic-operator-controller-manager-5bfcdc958c-kqtc5\" (UID: \"4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.072671 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnxkv\" (UniqueName: \"kubernetes.io/projected/99dfd9fb-d4ea-4f2a-bbd5-670d1a75c7fd-kube-api-access-dnxkv\") pod \"horizon-operator-controller-manager-68c9694994-sh48c\" (UID: \"99dfd9fb-d4ea-4f2a-bbd5-670d1a75c7fd\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.072689 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdlwb\" (UniqueName: \"kubernetes.io/projected/8d4269ad-a2ff-47be-bade-792bbf616cf2-kube-api-access-pdlwb\") pod \"infra-operator-controller-manager-d5cc86f4b-hbs6w\" (UID: \"8d4269ad-a2ff-47be-bade-792bbf616cf2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.073146 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.078681 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 
12:41:31.080123 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.084304 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6dkj4" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.084927 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.085979 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.089049 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bpzhs" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.121698 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.125478 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.128282 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.169470 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp8ss\" (UniqueName: \"kubernetes.io/projected/91ae544e-de6e-44e8-9119-eae33586fe56-kube-api-access-rp8ss\") pod \"glance-operator-controller-manager-68b95954c9-w22c4\" (UID: \"91ae544e-de6e-44e8-9119-eae33586fe56\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.170098 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s59s\" (UniqueName: \"kubernetes.io/projected/e2224700-f8c7-4380-95c5-537e168c7e99-kube-api-access-8s59s\") pod \"heat-operator-controller-manager-774b86978c-p9zgm\" (UID: \"e2224700-f8c7-4380-95c5-537e168c7e99\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.173481 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdlwb\" (UniqueName: \"kubernetes.io/projected/8d4269ad-a2ff-47be-bade-792bbf616cf2-kube-api-access-pdlwb\") pod \"infra-operator-controller-manager-d5cc86f4b-hbs6w\" (UID: \"8d4269ad-a2ff-47be-bade-792bbf616cf2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.173539 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhjmr\" (UniqueName: \"kubernetes.io/projected/991621b1-366e-4d35-b1b7-6380e506ea08-kube-api-access-bhjmr\") pod 
\"manila-operator-controller-manager-58bb8d67cc-9q9zc\" (UID: \"991621b1-366e-4d35-b1b7-6380e506ea08\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.173586 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsflv\" (UniqueName: \"kubernetes.io/projected/8465956b-6245-447e-adcd-7ba8367ca117-kube-api-access-tsflv\") pod \"neutron-operator-controller-manager-7c57c8bbc4-vwvff\" (UID: \"8465956b-6245-447e-adcd-7ba8367ca117\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.173618 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d4269ad-a2ff-47be-bade-792bbf616cf2-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-hbs6w\" (UID: \"8d4269ad-a2ff-47be-bade-792bbf616cf2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.173650 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhgjw\" (UniqueName: \"kubernetes.io/projected/274dfe9d-6821-481f-a605-bf8fbf101f89-kube-api-access-dhgjw\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-jgtkf\" (UID: \"274dfe9d-6821-481f-a605-bf8fbf101f89\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.173671 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbqb5\" (UniqueName: \"kubernetes.io/projected/4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e-kube-api-access-lbqb5\") pod \"ironic-operator-controller-manager-5bfcdc958c-kqtc5\" (UID: \"4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e\") " 
pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.173693 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dgj8\" (UniqueName: \"kubernetes.io/projected/bfc7de9e-743d-4492-979c-7043fb8b41d1-kube-api-access-4dgj8\") pod \"nova-operator-controller-manager-79556f57fc-xjntp\" (UID: \"bfc7de9e-743d-4492-979c-7043fb8b41d1\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.173716 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqhq7\" (UniqueName: \"kubernetes.io/projected/09cf908e-b30f-47ea-a4d1-2e50a192289f-kube-api-access-rqhq7\") pod \"keystone-operator-controller-manager-748dc6576f-s2rwz\" (UID: \"09cf908e-b30f-47ea-a4d1-2e50a192289f\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.173743 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnxkv\" (UniqueName: \"kubernetes.io/projected/99dfd9fb-d4ea-4f2a-bbd5-670d1a75c7fd-kube-api-access-dnxkv\") pod \"horizon-operator-controller-manager-68c9694994-sh48c\" (UID: \"99dfd9fb-d4ea-4f2a-bbd5-670d1a75c7fd\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c" Nov 24 12:41:31 crc kubenswrapper[4756]: E1124 12:41:31.174254 4756 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 24 12:41:31 crc kubenswrapper[4756]: E1124 12:41:31.174299 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d4269ad-a2ff-47be-bade-792bbf616cf2-cert podName:8d4269ad-a2ff-47be-bade-792bbf616cf2 nodeName:}" failed. 
No retries permitted until 2025-11-24 12:41:31.674282446 +0000 UTC m=+824.031796588 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d4269ad-a2ff-47be-bade-792bbf616cf2-cert") pod "infra-operator-controller-manager-d5cc86f4b-hbs6w" (UID: "8d4269ad-a2ff-47be-bade-792bbf616cf2") : secret "infra-operator-webhook-server-cert" not found Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.177088 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrrsp\" (UniqueName: \"kubernetes.io/projected/3bc7fab7-280b-4964-a1f0-51f0b59438ed-kube-api-access-qrrsp\") pod \"designate-operator-controller-manager-7d695c9b56-djhp4\" (UID: \"3bc7fab7-280b-4964-a1f0-51f0b59438ed\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.187092 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.202659 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdlwb\" (UniqueName: \"kubernetes.io/projected/8d4269ad-a2ff-47be-bade-792bbf616cf2-kube-api-access-pdlwb\") pod \"infra-operator-controller-manager-d5cc86f4b-hbs6w\" (UID: \"8d4269ad-a2ff-47be-bade-792bbf616cf2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.203849 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbqb5\" (UniqueName: \"kubernetes.io/projected/4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e-kube-api-access-lbqb5\") pod \"ironic-operator-controller-manager-5bfcdc958c-kqtc5\" (UID: \"4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" Nov 24 12:41:31 crc 
kubenswrapper[4756]: I1124 12:41:31.209121 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnxkv\" (UniqueName: \"kubernetes.io/projected/99dfd9fb-d4ea-4f2a-bbd5-670d1a75c7fd-kube-api-access-dnxkv\") pod \"horizon-operator-controller-manager-68c9694994-sh48c\" (UID: \"99dfd9fb-d4ea-4f2a-bbd5-670d1a75c7fd\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.201479 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.216576 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.222582 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.231869 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.244529 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.247959 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-pbl8w" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.274972 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqhq7\" (UniqueName: \"kubernetes.io/projected/09cf908e-b30f-47ea-a4d1-2e50a192289f-kube-api-access-rqhq7\") pod \"keystone-operator-controller-manager-748dc6576f-s2rwz\" (UID: \"09cf908e-b30f-47ea-a4d1-2e50a192289f\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.275035 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhjmr\" (UniqueName: \"kubernetes.io/projected/991621b1-366e-4d35-b1b7-6380e506ea08-kube-api-access-bhjmr\") pod \"manila-operator-controller-manager-58bb8d67cc-9q9zc\" (UID: \"991621b1-366e-4d35-b1b7-6380e506ea08\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.275076 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsflv\" (UniqueName: \"kubernetes.io/projected/8465956b-6245-447e-adcd-7ba8367ca117-kube-api-access-tsflv\") pod \"neutron-operator-controller-manager-7c57c8bbc4-vwvff\" (UID: \"8465956b-6245-447e-adcd-7ba8367ca117\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.275133 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhgjw\" (UniqueName: \"kubernetes.io/projected/274dfe9d-6821-481f-a605-bf8fbf101f89-kube-api-access-dhgjw\") pod 
\"mariadb-operator-controller-manager-cb6c4fdb7-jgtkf\" (UID: \"274dfe9d-6821-481f-a605-bf8fbf101f89\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.275196 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dgj8\" (UniqueName: \"kubernetes.io/projected/bfc7de9e-743d-4492-979c-7043fb8b41d1-kube-api-access-4dgj8\") pod \"nova-operator-controller-manager-79556f57fc-xjntp\" (UID: \"bfc7de9e-743d-4492-979c-7043fb8b41d1\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.283088 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.299906 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhgjw\" (UniqueName: \"kubernetes.io/projected/274dfe9d-6821-481f-a605-bf8fbf101f89-kube-api-access-dhgjw\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-jgtkf\" (UID: \"274dfe9d-6821-481f-a605-bf8fbf101f89\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.303034 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhjmr\" (UniqueName: \"kubernetes.io/projected/991621b1-366e-4d35-b1b7-6380e506ea08-kube-api-access-bhjmr\") pod \"manila-operator-controller-manager-58bb8d67cc-9q9zc\" (UID: \"991621b1-366e-4d35-b1b7-6380e506ea08\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.307686 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.308112 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dgj8\" (UniqueName: \"kubernetes.io/projected/bfc7de9e-743d-4492-979c-7043fb8b41d1-kube-api-access-4dgj8\") pod \"nova-operator-controller-manager-79556f57fc-xjntp\" (UID: \"bfc7de9e-743d-4492-979c-7043fb8b41d1\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.309128 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqhq7\" (UniqueName: \"kubernetes.io/projected/09cf908e-b30f-47ea-a4d1-2e50a192289f-kube-api-access-rqhq7\") pod \"keystone-operator-controller-manager-748dc6576f-s2rwz\" (UID: \"09cf908e-b30f-47ea-a4d1-2e50a192289f\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.318221 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsflv\" (UniqueName: \"kubernetes.io/projected/8465956b-6245-447e-adcd-7ba8367ca117-kube-api-access-tsflv\") pod \"neutron-operator-controller-manager-7c57c8bbc4-vwvff\" (UID: \"8465956b-6245-447e-adcd-7ba8367ca117\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.318509 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.320517 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.325993 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tbgk6" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.332379 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.335481 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.338418 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.345800 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2l5hb" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.345976 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.347774 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.354394 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.354729 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-5tgqh" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.368143 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.376576 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjv56\" (UniqueName: \"kubernetes.io/projected/f874e9c8-d248-46c4-a1f2-8912827db14f-kube-api-access-jjv56\") pod \"octavia-operator-controller-manager-fd75fd47d-m4r6n\" (UID: \"f874e9c8-d248-46c4-a1f2-8912827db14f\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.379929 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.381576 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.387043 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.397102 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.398318 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-kd8rw" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.422307 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.433604 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.434918 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.441105 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-x74hx" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.458263 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.463951 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.472729 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.479520 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q57s\" (UniqueName: \"kubernetes.io/projected/46c6804b-e74a-42d0-bc4e-2ffa7a5fa491-kube-api-access-4q57s\") pod \"ovn-operator-controller-manager-66cf5c67ff-l9fh4\" (UID: \"46c6804b-e74a-42d0-bc4e-2ffa7a5fa491\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.479593 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb\" (UID: \"94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.479630 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjv56\" (UniqueName: \"kubernetes.io/projected/f874e9c8-d248-46c4-a1f2-8912827db14f-kube-api-access-jjv56\") pod \"octavia-operator-controller-manager-fd75fd47d-m4r6n\" (UID: \"f874e9c8-d248-46c4-a1f2-8912827db14f\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.479689 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgvkr\" (UniqueName: \"kubernetes.io/projected/4c0ece30-ae1b-4706-861c-2ee51f7332d7-kube-api-access-wgvkr\") pod 
\"swift-operator-controller-manager-6fdc4fcf86-rb4l5\" (UID: \"4c0ece30-ae1b-4706-861c-2ee51f7332d7\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.479706 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdsn7\" (UniqueName: \"kubernetes.io/projected/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-kube-api-access-cdsn7\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb\" (UID: \"94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.479724 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8l8r\" (UniqueName: \"kubernetes.io/projected/248f663c-2ddc-487f-a33c-9d7b9bad23be-kube-api-access-f8l8r\") pod \"placement-operator-controller-manager-5db546f9d9-dkjtg\" (UID: \"248f663c-2ddc-487f-a33c-9d7b9bad23be\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.486492 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.497921 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.498748 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.499369 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.508889 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9xk76" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.514001 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjv56\" (UniqueName: \"kubernetes.io/projected/f874e9c8-d248-46c4-a1f2-8912827db14f-kube-api-access-jjv56\") pod \"octavia-operator-controller-manager-fd75fd47d-m4r6n\" (UID: \"f874e9c8-d248-46c4-a1f2-8912827db14f\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.528234 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.529439 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.533771 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-pfprq" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.551564 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.567720 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.578281 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.579250 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.580795 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.580991 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9f89\" (UniqueName: \"kubernetes.io/projected/7480249a-d35a-4768-b5cc-daebd6f82c9b-kube-api-access-n9f89\") pod \"telemetry-operator-controller-manager-567f98c9d-spm8x\" (UID: \"7480249a-d35a-4768-b5cc-daebd6f82c9b\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.581071 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgvkr\" (UniqueName: \"kubernetes.io/projected/4c0ece30-ae1b-4706-861c-2ee51f7332d7-kube-api-access-wgvkr\") pod \"swift-operator-controller-manager-6fdc4fcf86-rb4l5\" (UID: \"4c0ece30-ae1b-4706-861c-2ee51f7332d7\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.581106 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdsn7\" (UniqueName: \"kubernetes.io/projected/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-kube-api-access-cdsn7\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb\" (UID: 
\"94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.581135 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8l8r\" (UniqueName: \"kubernetes.io/projected/248f663c-2ddc-487f-a33c-9d7b9bad23be-kube-api-access-f8l8r\") pod \"placement-operator-controller-manager-5db546f9d9-dkjtg\" (UID: \"248f663c-2ddc-487f-a33c-9d7b9bad23be\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.581006 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.581306 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q57s\" (UniqueName: \"kubernetes.io/projected/46c6804b-e74a-42d0-bc4e-2ffa7a5fa491-kube-api-access-4q57s\") pod \"ovn-operator-controller-manager-66cf5c67ff-l9fh4\" (UID: \"46c6804b-e74a-42d0-bc4e-2ffa7a5fa491\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.581345 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb\" (UID: \"94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.581052 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-6l2mb" Nov 24 12:41:31 crc kubenswrapper[4756]: E1124 12:41:31.581720 4756 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:41:31 crc kubenswrapper[4756]: E1124 12:41:31.581776 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-cert podName:94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f nodeName:}" failed. No retries permitted until 2025-11-24 12:41:32.081759539 +0000 UTC m=+824.439273681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" (UID: "94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.585486 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.606701 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdsn7\" (UniqueName: \"kubernetes.io/projected/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-kube-api-access-cdsn7\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb\" (UID: \"94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.607958 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgvkr\" (UniqueName: \"kubernetes.io/projected/4c0ece30-ae1b-4706-861c-2ee51f7332d7-kube-api-access-wgvkr\") pod \"swift-operator-controller-manager-6fdc4fcf86-rb4l5\" (UID: \"4c0ece30-ae1b-4706-861c-2ee51f7332d7\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5" Nov 24 12:41:31 crc kubenswrapper[4756]: 
I1124 12:41:31.608344 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.615987 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q57s\" (UniqueName: \"kubernetes.io/projected/46c6804b-e74a-42d0-bc4e-2ffa7a5fa491-kube-api-access-4q57s\") pod \"ovn-operator-controller-manager-66cf5c67ff-l9fh4\" (UID: \"46c6804b-e74a-42d0-bc4e-2ffa7a5fa491\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.633554 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8l8r\" (UniqueName: \"kubernetes.io/projected/248f663c-2ddc-487f-a33c-9d7b9bad23be-kube-api-access-f8l8r\") pod \"placement-operator-controller-manager-5db546f9d9-dkjtg\" (UID: \"248f663c-2ddc-487f-a33c-9d7b9bad23be\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.646413 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.656877 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.683174 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-webhook-certs\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.683269 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9f89\" (UniqueName: \"kubernetes.io/projected/7480249a-d35a-4768-b5cc-daebd6f82c9b-kube-api-access-n9f89\") pod \"telemetry-operator-controller-manager-567f98c9d-spm8x\" (UID: \"7480249a-d35a-4768-b5cc-daebd6f82c9b\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.683301 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpv2z\" (UniqueName: \"kubernetes.io/projected/fc8be713-d12e-4289-adbd-a3aee9ebf603-kube-api-access-jpv2z\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.683342 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scfn5\" (UniqueName: \"kubernetes.io/projected/2f99acab-4016-43dc-ab21-6d0c920def14-kube-api-access-scfn5\") pod \"test-operator-controller-manager-5cb74df96-5mpgm\" (UID: \"2f99acab-4016-43dc-ab21-6d0c920def14\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm" Nov 24 12:41:31 
crc kubenswrapper[4756]: I1124 12:41:31.683379 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-metrics-certs\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.683419 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d4269ad-a2ff-47be-bade-792bbf616cf2-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-hbs6w\" (UID: \"8d4269ad-a2ff-47be-bade-792bbf616cf2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.683444 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l9ff\" (UniqueName: \"kubernetes.io/projected/eb1f334a-14e2-4f63-8168-e5db902d8e70-kube-api-access-6l9ff\") pod \"watcher-operator-controller-manager-7445f8dd59-46b2l\" (UID: \"eb1f334a-14e2-4f63-8168-e5db902d8e70\") " pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.697597 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.701412 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.702797 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d4269ad-a2ff-47be-bade-792bbf616cf2-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-hbs6w\" (UID: \"8d4269ad-a2ff-47be-bade-792bbf616cf2\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.707495 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.724117 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9f89\" (UniqueName: \"kubernetes.io/projected/7480249a-d35a-4768-b5cc-daebd6f82c9b-kube-api-access-n9f89\") pod \"telemetry-operator-controller-manager-567f98c9d-spm8x\" (UID: \"7480249a-d35a-4768-b5cc-daebd6f82c9b\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.746696 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.752662 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-gnqfp" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.766930 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.786388 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpv2z\" (UniqueName: \"kubernetes.io/projected/fc8be713-d12e-4289-adbd-a3aee9ebf603-kube-api-access-jpv2z\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.786521 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scfn5\" (UniqueName: \"kubernetes.io/projected/2f99acab-4016-43dc-ab21-6d0c920def14-kube-api-access-scfn5\") pod \"test-operator-controller-manager-5cb74df96-5mpgm\" (UID: \"2f99acab-4016-43dc-ab21-6d0c920def14\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.786619 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-metrics-certs\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.786780 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l9ff\" (UniqueName: \"kubernetes.io/projected/eb1f334a-14e2-4f63-8168-e5db902d8e70-kube-api-access-6l9ff\") pod \"watcher-operator-controller-manager-7445f8dd59-46b2l\" (UID: \"eb1f334a-14e2-4f63-8168-e5db902d8e70\") " pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l" Nov 24 12:41:31 crc kubenswrapper[4756]: E1124 12:41:31.788254 4756 
secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 12:41:31 crc kubenswrapper[4756]: E1124 12:41:31.788307 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-metrics-certs podName:fc8be713-d12e-4289-adbd-a3aee9ebf603 nodeName:}" failed. No retries permitted until 2025-11-24 12:41:32.288291462 +0000 UTC m=+824.645805604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-metrics-certs") pod "openstack-operator-controller-manager-57bd844978-vgphd" (UID: "fc8be713-d12e-4289-adbd-a3aee9ebf603") : secret "metrics-server-cert" not found Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.792822 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-webhook-certs\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:31 crc kubenswrapper[4756]: E1124 12:41:31.793118 4756 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 12:41:31 crc kubenswrapper[4756]: E1124 12:41:31.796642 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-webhook-certs podName:fc8be713-d12e-4289-adbd-a3aee9ebf603 nodeName:}" failed. No retries permitted until 2025-11-24 12:41:32.296592622 +0000 UTC m=+824.654106834 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-webhook-certs") pod "openstack-operator-controller-manager-57bd844978-vgphd" (UID: "fc8be713-d12e-4289-adbd-a3aee9ebf603") : secret "webhook-server-cert" not found Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.814213 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpv2z\" (UniqueName: \"kubernetes.io/projected/fc8be713-d12e-4289-adbd-a3aee9ebf603-kube-api-access-jpv2z\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.850619 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scfn5\" (UniqueName: \"kubernetes.io/projected/2f99acab-4016-43dc-ab21-6d0c920def14-kube-api-access-scfn5\") pod \"test-operator-controller-manager-5cb74df96-5mpgm\" (UID: \"2f99acab-4016-43dc-ab21-6d0c920def14\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.860590 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l9ff\" (UniqueName: \"kubernetes.io/projected/eb1f334a-14e2-4f63-8168-e5db902d8e70-kube-api-access-6l9ff\") pod \"watcher-operator-controller-manager-7445f8dd59-46b2l\" (UID: \"eb1f334a-14e2-4f63-8168-e5db902d8e70\") " pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.879414 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.894586 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r7pb\" (UniqueName: \"kubernetes.io/projected/0624f295-ba46-4e28-9f0f-356cfbe6ecbc-kube-api-access-5r7pb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kvg52\" (UID: \"0624f295-ba46-4e28-9f0f-356cfbe6ecbc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.935919 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.954613 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx"] Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.987702 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm" Nov 24 12:41:31 crc kubenswrapper[4756]: I1124 12:41:31.996556 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r7pb\" (UniqueName: \"kubernetes.io/projected/0624f295-ba46-4e28-9f0f-356cfbe6ecbc-kube-api-access-5r7pb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kvg52\" (UID: \"0624f295-ba46-4e28-9f0f-356cfbe6ecbc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52" Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.026352 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r7pb\" (UniqueName: \"kubernetes.io/projected/0624f295-ba46-4e28-9f0f-356cfbe6ecbc-kube-api-access-5r7pb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kvg52\" (UID: \"0624f295-ba46-4e28-9f0f-356cfbe6ecbc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52" Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.050035 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l" Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.097802 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52" Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.098942 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb\" (UID: \"94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" Nov 24 12:41:32 crc kubenswrapper[4756]: E1124 12:41:32.099116 4756 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:41:32 crc kubenswrapper[4756]: E1124 12:41:32.099188 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-cert podName:94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f nodeName:}" failed. No retries permitted until 2025-11-24 12:41:33.099168832 +0000 UTC m=+825.456682974 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" (UID: "94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.303739 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-webhook-certs\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:32 crc kubenswrapper[4756]: E1124 12:41:32.303910 4756 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 12:41:32 crc kubenswrapper[4756]: E1124 12:41:32.303991 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-webhook-certs podName:fc8be713-d12e-4289-adbd-a3aee9ebf603 nodeName:}" failed. No retries permitted until 2025-11-24 12:41:33.303967678 +0000 UTC m=+825.661481820 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-webhook-certs") pod "openstack-operator-controller-manager-57bd844978-vgphd" (UID: "fc8be713-d12e-4289-adbd-a3aee9ebf603") : secret "webhook-server-cert" not found Nov 24 12:41:32 crc kubenswrapper[4756]: E1124 12:41:32.304017 4756 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 12:41:32 crc kubenswrapper[4756]: E1124 12:41:32.304070 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-metrics-certs podName:fc8be713-d12e-4289-adbd-a3aee9ebf603 nodeName:}" failed. No retries permitted until 2025-11-24 12:41:33.3040559 +0000 UTC m=+825.661570042 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-metrics-certs") pod "openstack-operator-controller-manager-57bd844978-vgphd" (UID: "fc8be713-d12e-4289-adbd-a3aee9ebf603") : secret "metrics-server-cert" not found Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.303919 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-metrics-certs\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.429858 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm"] Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.456683 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 
12:41:32.471624 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx"] Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.472863 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx" event={"ID":"0aa7a2bc-482f-4ed4-820d-331ea6d971c7","Type":"ContainerStarted","Data":"7a51acf783d2980c04cf384eed04512a6655af2f5527497e91675e09b97b3b6f"} Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.489653 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm" event={"ID":"e2224700-f8c7-4380-95c5-537e168c7e99","Type":"ContainerStarted","Data":"e65956469c08a072b95decc13c957f010799f4655d1a8236c2e27c715fbe8d27"} Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.489688 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4"] Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.489704 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx" event={"ID":"81563dca-2369-4349-9881-b2031df19de0","Type":"ContainerStarted","Data":"731a91b627e821a3806d1438f05463272acf5d04b6ffdc9d7bc6f3a32114ce2b"} Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.704736 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4"] Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.715267 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c"] Nov 24 12:41:32 crc kubenswrapper[4756]: W1124 12:41:32.720990 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99dfd9fb_d4ea_4f2a_bbd5_670d1a75c7fd.slice/crio-bd5b47c54adc25a66fc7bff6964ac08e2cf2f844d7ed34f37289d14b3d08e2ff WatchSource:0}: Error finding container bd5b47c54adc25a66fc7bff6964ac08e2cf2f844d7ed34f37289d14b3d08e2ff: Status 404 returned error can't find the container with id bd5b47c54adc25a66fc7bff6964ac08e2cf2f844d7ed34f37289d14b3d08e2ff Nov 24 12:41:32 crc kubenswrapper[4756]: W1124 12:41:32.728895 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod991621b1_366e_4d35_b1b7_6380e506ea08.slice/crio-e47b77abb9708e8814f682f39ee9c47a40b68f62968cac2392b8a59b5af44c37 WatchSource:0}: Error finding container e47b77abb9708e8814f682f39ee9c47a40b68f62968cac2392b8a59b5af44c37: Status 404 returned error can't find the container with id e47b77abb9708e8814f682f39ee9c47a40b68f62968cac2392b8a59b5af44c37 Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.732810 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc"] Nov 24 12:41:32 crc kubenswrapper[4756]: W1124 12:41:32.739412 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09cf908e_b30f_47ea_a4d1_2e50a192289f.slice/crio-9f7fe2d8e5d6bf86b9ad0eb6cf3f41dca47857a15ba4e1f8cf0a102e6e5ca48c WatchSource:0}: Error finding container 9f7fe2d8e5d6bf86b9ad0eb6cf3f41dca47857a15ba4e1f8cf0a102e6e5ca48c: Status 404 returned error can't find the container with id 9f7fe2d8e5d6bf86b9ad0eb6cf3f41dca47857a15ba4e1f8cf0a102e6e5ca48c Nov 24 12:41:32 crc kubenswrapper[4756]: I1124 12:41:32.741367 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz"] Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.063422 4756 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w"] Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.071039 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n"] Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.096138 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4"] Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.108106 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff"] Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.118033 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb\" (UID: \"94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.118236 4756 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.118311 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-cert podName:94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f nodeName:}" failed. No retries permitted until 2025-11-24 12:41:35.118289345 +0000 UTC m=+827.475803487 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" (UID: "94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.119750 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp"] Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.127848 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf"] Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.149394 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5"] Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.159126 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm"] Nov 24 12:41:33 crc kubenswrapper[4756]: W1124 12:41:33.159520 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f99acab_4016_43dc_ab21_6d0c920def14.slice/crio-9f097f2ee20aa2353e89788e007d8066918c732281c71d07ac0d68a079bc5dde WatchSource:0}: Error finding container 9f097f2ee20aa2353e89788e007d8066918c732281c71d07ac0d68a079bc5dde: Status 404 returned error can't find the container with id 9f097f2ee20aa2353e89788e007d8066918c732281c71d07ac0d68a079bc5dde Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.166210 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l"] Nov 24 12:41:33 crc kubenswrapper[4756]: W1124 12:41:33.166725 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb1f334a_14e2_4f63_8168_e5db902d8e70.slice/crio-67d129bf80abdfb069baefe439ff006496d76c0d3b0027da905af801c34e0829 WatchSource:0}: Error finding container 67d129bf80abdfb069baefe439ff006496d76c0d3b0027da905af801c34e0829: Status 404 returned error can't find the container with id 67d129bf80abdfb069baefe439ff006496d76c0d3b0027da905af801c34e0829 Nov 24 12:41:33 crc kubenswrapper[4756]: W1124 12:41:33.167006 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod274dfe9d_6821_481f_a605_bf8fbf101f89.slice/crio-ef180427c8b6b20dee4507d9b80f51c3ff4043374888b536b6cac6e50133a77f WatchSource:0}: Error finding container ef180427c8b6b20dee4507d9b80f51c3ff4043374888b536b6cac6e50133a77f: Status 404 returned error can't find the container with id ef180427c8b6b20dee4507d9b80f51c3ff4043374888b536b6cac6e50133a77f Nov 24 12:41:33 crc kubenswrapper[4756]: W1124 12:41:33.171341 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46c6804b_e74a_42d0_bc4e_2ffa7a5fa491.slice/crio-7f30ff6421733206b5ed5e72c11b11b0bc2ee94e39fa7ca4eb2e6624b07f95fa WatchSource:0}: Error finding container 7f30ff6421733206b5ed5e72c11b11b0bc2ee94e39fa7ca4eb2e6624b07f95fa: Status 404 returned error can't find the container with id 7f30ff6421733206b5ed5e72c11b11b0bc2ee94e39fa7ca4eb2e6624b07f95fa Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.172646 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.217:5001/openstack-k8s-operators/watcher-operator:289e5536c6ac7bc28f20a464bb9c491d05ddd185,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6l9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7445f8dd59-46b2l_openstack-operators(eb1f334a-14e2-4f63-8168-e5db902d8e70): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.173350 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhgjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-jgtkf_openstack-operators(274dfe9d-6821-481f-a605-bf8fbf101f89): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:33 crc kubenswrapper[4756]: W1124 12:41:33.174919 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod248f663c_2ddc_487f_a33c_9d7b9bad23be.slice/crio-c1720f161ba6336864f5ffd185b810adc5f0a5e4cf4e3498c264bf6bdc331275 WatchSource:0}: Error finding container 
c1720f161ba6336864f5ffd185b810adc5f0a5e4cf4e3498c264bf6bdc331275: Status 404 returned error can't find the container with id c1720f161ba6336864f5ffd185b810adc5f0a5e4cf4e3498c264bf6bdc331275 Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.174900 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4q57s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-l9fh4_openstack-operators(46c6804b-e74a-42d0-bc4e-2ffa7a5fa491): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.175005 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6l9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7445f8dd59-46b2l_openstack-operators(eb1f334a-14e2-4f63-8168-e5db902d8e70): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.175645 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhgjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-jgtkf_openstack-operators(274dfe9d-6821-481f-a605-bf8fbf101f89): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.176111 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l" podUID="eb1f334a-14e2-4f63-8168-e5db902d8e70" Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.177057 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg"] Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.177117 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf" podUID="274dfe9d-6821-481f-a605-bf8fbf101f89" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.177960 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f8l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-dkjtg_openstack-operators(248f663c-2ddc-487f-a33c-9d7b9bad23be): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.178080 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4q57s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-l9fh4_openstack-operators(46c6804b-e74a-42d0-bc4e-2ffa7a5fa491): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.179238 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4" podUID="46c6804b-e74a-42d0-bc4e-2ffa7a5fa491" Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.183994 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x"] Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.184227 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ 
--logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f8l8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-dkjtg_openstack-operators(248f663c-2ddc-487f-a33c-9d7b9bad23be): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.187232 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg" podUID="248f663c-2ddc-487f-a33c-9d7b9bad23be" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.189229 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5r7pb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-kvg52_openstack-operators(0624f295-ba46-4e28-9f0f-356cfbe6ecbc): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.189301 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52"] Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.190458 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52" podUID="0624f295-ba46-4e28-9f0f-356cfbe6ecbc" Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.194897 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5"] Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.199271 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wgvkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-rb4l5_openstack-operators(4c0ece30-ae1b-4706-861c-2ee51f7332d7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.202080 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9f89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-spm8x_openstack-operators(7480249a-d35a-4768-b5cc-daebd6f82c9b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.202974 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wgvkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-rb4l5_openstack-operators(4c0ece30-ae1b-4706-861c-2ee51f7332d7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.203663 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9f89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-spm8x_openstack-operators(7480249a-d35a-4768-b5cc-daebd6f82c9b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.204232 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5" podUID="4c0ece30-ae1b-4706-861c-2ee51f7332d7" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.204765 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x" podUID="7480249a-d35a-4768-b5cc-daebd6f82c9b" Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.322909 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-webhook-certs\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.323064 4756 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.323235 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-webhook-certs podName:fc8be713-d12e-4289-adbd-a3aee9ebf603 nodeName:}" failed. No retries permitted until 2025-11-24 12:41:35.323219924 +0000 UTC m=+827.680734066 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-webhook-certs") pod "openstack-operator-controller-manager-57bd844978-vgphd" (UID: "fc8be713-d12e-4289-adbd-a3aee9ebf603") : secret "webhook-server-cert" not found Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.323252 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-metrics-certs\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.323337 4756 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.323359 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-metrics-certs podName:fc8be713-d12e-4289-adbd-a3aee9ebf603 nodeName:}" failed. No retries permitted until 2025-11-24 12:41:35.323352328 +0000 UTC m=+827.680866470 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-metrics-certs") pod "openstack-operator-controller-manager-57bd844978-vgphd" (UID: "fc8be713-d12e-4289-adbd-a3aee9ebf603") : secret "metrics-server-cert" not found Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.500055 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x" event={"ID":"7480249a-d35a-4768-b5cc-daebd6f82c9b","Type":"ContainerStarted","Data":"dc70491433dbb751a3dc929745e0964f329bb37dfe47ef3fc8cf2fb700b8fdd6"} Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.504331 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4" event={"ID":"3bc7fab7-280b-4964-a1f0-51f0b59438ed","Type":"ContainerStarted","Data":"d4ca51ad2e41bf1401cc2bf85994b71b52305906a551f670207348f231859525"} Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.505550 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x" podUID="7480249a-d35a-4768-b5cc-daebd6f82c9b" Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.506790 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4" event={"ID":"91ae544e-de6e-44e8-9119-eae33586fe56","Type":"ContainerStarted","Data":"3a2c2206ec1c3ca9fd4fa2e59bb66d24026a8c418dbcac014b90ac10d6165f6c"} Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.510110 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" event={"ID":"4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e","Type":"ContainerStarted","Data":"ca170e750f934f6af1d08e799db247a0b8a33b1c79432fde9de692b9f017086c"} Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.513615 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5" event={"ID":"4c0ece30-ae1b-4706-861c-2ee51f7332d7","Type":"ContainerStarted","Data":"eb51c1e45d550bb2d3438da22488441f92f6d8f97125f5826f3145a014ecef2e"} Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.516808 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" event={"ID":"8465956b-6245-447e-adcd-7ba8367ca117","Type":"ContainerStarted","Data":"94d699b02c0771967057a5436a22a86febf3c1029d43772080a2d862409a330a"} Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.518486 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5" podUID="4c0ece30-ae1b-4706-861c-2ee51f7332d7" Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.519146 
4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf" event={"ID":"274dfe9d-6821-481f-a605-bf8fbf101f89","Type":"ContainerStarted","Data":"ef180427c8b6b20dee4507d9b80f51c3ff4043374888b536b6cac6e50133a77f"} Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.521007 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52" event={"ID":"0624f295-ba46-4e28-9f0f-356cfbe6ecbc","Type":"ContainerStarted","Data":"e88e92755cf07b1ef35e7223d51514ea4c06fc743e775ff40c225e2e3d09e660"} Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.523492 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n" event={"ID":"f874e9c8-d248-46c4-a1f2-8912827db14f","Type":"ContainerStarted","Data":"5482ace53541f5d918c2b9593fe5959570b426af11799a6fd3288581162455e7"} Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.523921 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52" podUID="0624f295-ba46-4e28-9f0f-356cfbe6ecbc" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.523984 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf" podUID="274dfe9d-6821-481f-a605-bf8fbf101f89" Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.525280 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l" event={"ID":"eb1f334a-14e2-4f63-8168-e5db902d8e70","Type":"ContainerStarted","Data":"67d129bf80abdfb069baefe439ff006496d76c0d3b0027da905af801c34e0829"} Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.527331 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg" event={"ID":"248f663c-2ddc-487f-a33c-9d7b9bad23be","Type":"ContainerStarted","Data":"c1720f161ba6336864f5ffd185b810adc5f0a5e4cf4e3498c264bf6bdc331275"} Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.527657 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.217:5001/openstack-k8s-operators/watcher-operator:289e5536c6ac7bc28f20a464bb9c491d05ddd185\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l" podUID="eb1f334a-14e2-4f63-8168-e5db902d8e70" Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.528833 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg" podUID="248f663c-2ddc-487f-a33c-9d7b9bad23be" Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.529292 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz" event={"ID":"09cf908e-b30f-47ea-a4d1-2e50a192289f","Type":"ContainerStarted","Data":"9f7fe2d8e5d6bf86b9ad0eb6cf3f41dca47857a15ba4e1f8cf0a102e6e5ca48c"} Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.530701 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4" event={"ID":"46c6804b-e74a-42d0-bc4e-2ffa7a5fa491","Type":"ContainerStarted","Data":"7f30ff6421733206b5ed5e72c11b11b0bc2ee94e39fa7ca4eb2e6624b07f95fa"} Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.539364 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc" event={"ID":"991621b1-366e-4d35-b1b7-6380e506ea08","Type":"ContainerStarted","Data":"e47b77abb9708e8814f682f39ee9c47a40b68f62968cac2392b8a59b5af44c37"} Nov 24 12:41:33 crc kubenswrapper[4756]: E1124 12:41:33.551815 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4" podUID="46c6804b-e74a-42d0-bc4e-2ffa7a5fa491" Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.552705 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" 
event={"ID":"bfc7de9e-743d-4492-979c-7043fb8b41d1","Type":"ContainerStarted","Data":"e61d10566f9a353cb7cf67b9ce543a986087b837fffb4216767f41cc4945d551"} Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.560444 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm" event={"ID":"2f99acab-4016-43dc-ab21-6d0c920def14","Type":"ContainerStarted","Data":"9f097f2ee20aa2353e89788e007d8066918c732281c71d07ac0d68a079bc5dde"} Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.562320 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c" event={"ID":"99dfd9fb-d4ea-4f2a-bbd5-670d1a75c7fd","Type":"ContainerStarted","Data":"bd5b47c54adc25a66fc7bff6964ac08e2cf2f844d7ed34f37289d14b3d08e2ff"} Nov 24 12:41:33 crc kubenswrapper[4756]: I1124 12:41:33.564204 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" event={"ID":"8d4269ad-a2ff-47be-bade-792bbf616cf2","Type":"ContainerStarted","Data":"536c68eef142b9ea893106077fb35beff56a2e9b1629d88ec2709da575f47781"} Nov 24 12:41:34 crc kubenswrapper[4756]: E1124 12:41:34.576650 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52" podUID="0624f295-ba46-4e28-9f0f-356cfbe6ecbc" Nov 24 12:41:34 crc kubenswrapper[4756]: E1124 12:41:34.577907 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg" podUID="248f663c-2ddc-487f-a33c-9d7b9bad23be" Nov 24 12:41:34 crc kubenswrapper[4756]: E1124 12:41:34.578300 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf" podUID="274dfe9d-6821-481f-a605-bf8fbf101f89" Nov 24 12:41:34 crc kubenswrapper[4756]: E1124 12:41:34.578484 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4" podUID="46c6804b-e74a-42d0-bc4e-2ffa7a5fa491" Nov 24 12:41:34 crc kubenswrapper[4756]: E1124 12:41:34.578615 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5" podUID="4c0ece30-ae1b-4706-861c-2ee51f7332d7" Nov 24 12:41:34 crc kubenswrapper[4756]: E1124 12:41:34.578715 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.217:5001/openstack-k8s-operators/watcher-operator:289e5536c6ac7bc28f20a464bb9c491d05ddd185\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l" podUID="eb1f334a-14e2-4f63-8168-e5db902d8e70" Nov 24 12:41:34 crc kubenswrapper[4756]: E1124 12:41:34.578750 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x" podUID="7480249a-d35a-4768-b5cc-daebd6f82c9b" Nov 24 12:41:35 crc kubenswrapper[4756]: I1124 12:41:35.153833 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb\" 
(UID: \"94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" Nov 24 12:41:35 crc kubenswrapper[4756]: I1124 12:41:35.159939 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb\" (UID: \"94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" Nov 24 12:41:35 crc kubenswrapper[4756]: I1124 12:41:35.276865 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" Nov 24 12:41:35 crc kubenswrapper[4756]: I1124 12:41:35.356066 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-metrics-certs\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:35 crc kubenswrapper[4756]: I1124 12:41:35.356461 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-webhook-certs\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:35 crc kubenswrapper[4756]: I1124 12:41:35.361019 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-webhook-certs\") pod 
\"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:35 crc kubenswrapper[4756]: I1124 12:41:35.361140 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc8be713-d12e-4289-adbd-a3aee9ebf603-metrics-certs\") pod \"openstack-operator-controller-manager-57bd844978-vgphd\" (UID: \"fc8be713-d12e-4289-adbd-a3aee9ebf603\") " pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:35 crc kubenswrapper[4756]: I1124 12:41:35.368832 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:40 crc kubenswrapper[4756]: I1124 12:41:40.633215 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx" event={"ID":"0aa7a2bc-482f-4ed4-820d-331ea6d971c7","Type":"ContainerStarted","Data":"d54827e28097855cc5dc911eebac82c2a4e0c2cf9b2cf88b3bd2bde2471710c7"} Nov 24 12:41:40 crc kubenswrapper[4756]: I1124 12:41:40.809352 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd"] Nov 24 12:41:40 crc kubenswrapper[4756]: I1124 12:41:40.850948 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb"] Nov 24 12:41:40 crc kubenswrapper[4756]: W1124 12:41:40.856803 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc8be713_d12e_4289_adbd_a3aee9ebf603.slice/crio-86a15ed2fcae36cc10af4295550ea7620f03158656143a8e78d0225d77366cdd WatchSource:0}: Error finding container 
86a15ed2fcae36cc10af4295550ea7620f03158656143a8e78d0225d77366cdd: Status 404 returned error can't find the container with id 86a15ed2fcae36cc10af4295550ea7620f03158656143a8e78d0225d77366cdd Nov 24 12:41:40 crc kubenswrapper[4756]: W1124 12:41:40.867173 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94f6fb8a_7d3b_4e68_bfe8_c62edc439c4f.slice/crio-59a5e2fc769cd2fc7f3a683bcba8e6ee87afa5257c8d570b9990466ddee042d6 WatchSource:0}: Error finding container 59a5e2fc769cd2fc7f3a683bcba8e6ee87afa5257c8d570b9990466ddee042d6: Status 404 returned error can't find the container with id 59a5e2fc769cd2fc7f3a683bcba8e6ee87afa5257c8d570b9990466ddee042d6 Nov 24 12:41:40 crc kubenswrapper[4756]: E1124 12:41:40.999590 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4dgj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-xjntp_openstack-operators(bfc7de9e-743d-4492-979c-7043fb8b41d1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:41 crc kubenswrapper[4756]: E1124 12:41:41.000708 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" podUID="bfc7de9e-743d-4492-979c-7043fb8b41d1" Nov 24 12:41:41 crc kubenswrapper[4756]: E1124 12:41:41.004547 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbqb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5bfcdc958c-kqtc5_openstack-operators(4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:41 crc kubenswrapper[4756]: E1124 12:41:41.006033 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" podUID="4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e" Nov 24 12:41:41 crc kubenswrapper[4756]: E1124 12:41:41.023272 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tsflv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-vwvff_openstack-operators(8465956b-6245-447e-adcd-7ba8367ca117): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 12:41:41 crc kubenswrapper[4756]: E1124 12:41:41.024561 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" podUID="8465956b-6245-447e-adcd-7ba8367ca117" Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.650446 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx" 
event={"ID":"81563dca-2369-4349-9881-b2031df19de0","Type":"ContainerStarted","Data":"16d29d59d57d9d820e88287190462763e505d692420d1bf83bbd7b5ea4d24128"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.655557 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz" event={"ID":"09cf908e-b30f-47ea-a4d1-2e50a192289f","Type":"ContainerStarted","Data":"e5560c421180131121584ebd48742ca34e54969b3b8c50fd2f68bde500eb46bb"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.657575 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" event={"ID":"94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f","Type":"ContainerStarted","Data":"59a5e2fc769cd2fc7f3a683bcba8e6ee87afa5257c8d570b9990466ddee042d6"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.660610 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n" event={"ID":"f874e9c8-d248-46c4-a1f2-8912827db14f","Type":"ContainerStarted","Data":"9e9066e7f5edfda1db7d5151730ae0df8e35cf2a51bc7035d07e94519a26c190"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.662619 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm" event={"ID":"e2224700-f8c7-4380-95c5-537e168c7e99","Type":"ContainerStarted","Data":"ad379234f3ace736704838c2e218205fef4811eb3f28006b80c7250418a85b2d"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.665975 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" event={"ID":"fc8be713-d12e-4289-adbd-a3aee9ebf603","Type":"ContainerStarted","Data":"7a0e001430a5c00214b6450d2cc69cb1e4212e18406606ff70202551b4783fa0"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.666014 4756 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" event={"ID":"fc8be713-d12e-4289-adbd-a3aee9ebf603","Type":"ContainerStarted","Data":"86a15ed2fcae36cc10af4295550ea7620f03158656143a8e78d0225d77366cdd"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.666997 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.671110 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c" event={"ID":"99dfd9fb-d4ea-4f2a-bbd5-670d1a75c7fd","Type":"ContainerStarted","Data":"1bf9c9d989b9f88d7db155936c15461b21f52013a0ae2bbf870e4813adaf1a7b"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.683084 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4" event={"ID":"3bc7fab7-280b-4964-a1f0-51f0b59438ed","Type":"ContainerStarted","Data":"49a02a1e1db42ad39d329052c324474a8600e884a5e59e6b0ad66863b132da32"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.689814 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc" event={"ID":"991621b1-366e-4d35-b1b7-6380e506ea08","Type":"ContainerStarted","Data":"75f0873ce480b9877a4e095ddf2245d0588780cd8c0631abcfe652fc4d33711f"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.709248 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd" podStartSLOduration=10.709186219 podStartE2EDuration="10.709186219s" podCreationTimestamp="2025-11-24 12:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 12:41:41.697222068 +0000 UTC m=+834.054736220" watchObservedRunningTime="2025-11-24 12:41:41.709186219 +0000 UTC m=+834.066700361" Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.714219 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" event={"ID":"bfc7de9e-743d-4492-979c-7043fb8b41d1","Type":"ContainerStarted","Data":"154570f3d080d7bca9a27367e72d0fb22af7901d27d16adaf09e3395d596760e"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.714290 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" Nov 24 12:41:41 crc kubenswrapper[4756]: E1124 12:41:41.718855 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" podUID="bfc7de9e-743d-4492-979c-7043fb8b41d1" Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.722059 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm" event={"ID":"2f99acab-4016-43dc-ab21-6d0c920def14","Type":"ContainerStarted","Data":"2e2d581d3be4e1a8777630e8cd466143d40033010f63054a911afb29760e9c8e"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.725187 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" event={"ID":"8d4269ad-a2ff-47be-bade-792bbf616cf2","Type":"ContainerStarted","Data":"64f3e34b81196548ce443c0789f8af5d4394e7d659f9477798407b6de8950b04"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.728060 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4" event={"ID":"91ae544e-de6e-44e8-9119-eae33586fe56","Type":"ContainerStarted","Data":"f23e0c8a3fa109b8b46e4fb438997a4a16f748e9538018476a8bf9f9262c8e12"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.730104 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" event={"ID":"8465956b-6245-447e-adcd-7ba8367ca117","Type":"ContainerStarted","Data":"6d3d03175c3abac399b325c66630c6437b9629bc3a21303bb9b8a0c5bc8c122a"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.731036 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" Nov 24 12:41:41 crc kubenswrapper[4756]: E1124 12:41:41.738882 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" podUID="8465956b-6245-447e-adcd-7ba8367ca117" Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.740616 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" event={"ID":"4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e","Type":"ContainerStarted","Data":"2375091b7b8f3d5ebef3aadbcda8005bea1db51a17883c784395d44052e7aba5"} Nov 24 12:41:41 crc kubenswrapper[4756]: I1124 12:41:41.742417 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" Nov 24 12:41:41 crc kubenswrapper[4756]: E1124 12:41:41.749329 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" podUID="4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e" Nov 24 12:41:42 crc kubenswrapper[4756]: E1124 12:41:42.759092 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" podUID="8465956b-6245-447e-adcd-7ba8367ca117" Nov 24 12:41:42 crc kubenswrapper[4756]: E1124 12:41:42.759527 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" podUID="bfc7de9e-743d-4492-979c-7043fb8b41d1" Nov 24 12:41:42 crc kubenswrapper[4756]: E1124 12:41:42.761021 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" podUID="4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.780959 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4" event={"ID":"91ae544e-de6e-44e8-9119-eae33586fe56","Type":"ContainerStarted","Data":"a4e1e1d2be1b6f4821790b230fe0dfaf20e972abd89be55625c199b4153507c5"} Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.781478 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4" Nov 
24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.782617 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc" event={"ID":"991621b1-366e-4d35-b1b7-6380e506ea08","Type":"ContainerStarted","Data":"cf63c9a135555c520f3d01aee7b09b9b8131b672803c5de9a50164da40256ac7"} Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.782824 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.785318 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.785395 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.785875 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" event={"ID":"94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f","Type":"ContainerStarted","Data":"bd4a45796f25c41b2869e383d45ae28a17a1fd872dc9ec8b674c7b94539f2975"} Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.785909 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" event={"ID":"94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f","Type":"ContainerStarted","Data":"1cb03835fa4e7c93591f85fb7919ad24b843dc45aa779ac0a6ca8a4cd669d42e"} Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.785960 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.814131 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n" event={"ID":"f874e9c8-d248-46c4-a1f2-8912827db14f","Type":"ContainerStarted","Data":"a1ca237556a4920d626c1660d2a75fbb2d8c5321e80f4f89766b33525632bd7d"} Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.814928 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.820237 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.821148 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm" event={"ID":"e2224700-f8c7-4380-95c5-537e168c7e99","Type":"ContainerStarted","Data":"433f108d54b378c084480a8195dbcd99f5e8ddebcaaa7be127de2ff10d31dfb2"} Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.822221 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.825300 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.838539 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-w22c4" podStartSLOduration=3.940617274 podStartE2EDuration="15.838511922s" podCreationTimestamp="2025-11-24 12:41:30 +0000 UTC" firstStartedPulling="2025-11-24 12:41:32.484956114 +0000 UTC m=+824.842470256" lastFinishedPulling="2025-11-24 12:41:44.382850762 +0000 UTC m=+836.740364904" observedRunningTime="2025-11-24 
12:41:45.798712061 +0000 UTC m=+838.156226203" watchObservedRunningTime="2025-11-24 12:41:45.838511922 +0000 UTC m=+838.196026064" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.844181 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz" event={"ID":"09cf908e-b30f-47ea-a4d1-2e50a192289f","Type":"ContainerStarted","Data":"b7db2592c9a7431cdd5b7f03402d4be7579c3ce55197e5018d7ad84ac0b8583e"} Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.844373 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.849357 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.854309 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-9q9zc" podStartSLOduration=4.26832525 podStartE2EDuration="15.854216876s" podCreationTimestamp="2025-11-24 12:41:30 +0000 UTC" firstStartedPulling="2025-11-24 12:41:32.731889446 +0000 UTC m=+825.089403588" lastFinishedPulling="2025-11-24 12:41:44.317781072 +0000 UTC m=+836.675295214" observedRunningTime="2025-11-24 12:41:45.843994243 +0000 UTC m=+838.201508385" watchObservedRunningTime="2025-11-24 12:41:45.854216876 +0000 UTC m=+838.211731018" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.867609 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4" event={"ID":"3bc7fab7-280b-4964-a1f0-51f0b59438ed","Type":"ContainerStarted","Data":"f366985a6c4009efadd909a81a394a8427caffee2c425437833d6abb538a63ec"} Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.869453 4756 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.870271 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.875977 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" event={"ID":"8d4269ad-a2ff-47be-bade-792bbf616cf2","Type":"ContainerStarted","Data":"72d8a36e48c15d991563214a5b2286971d991c34465d535d520e97e72ccf6b38"} Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.876737 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb" podStartSLOduration=11.434093744 podStartE2EDuration="14.876719379s" podCreationTimestamp="2025-11-24 12:41:31 +0000 UTC" firstStartedPulling="2025-11-24 12:41:40.871433774 +0000 UTC m=+833.228947916" lastFinishedPulling="2025-11-24 12:41:44.314059409 +0000 UTC m=+836.671573551" observedRunningTime="2025-11-24 12:41:45.870020533 +0000 UTC m=+838.227534705" watchObservedRunningTime="2025-11-24 12:41:45.876719379 +0000 UTC m=+838.234233521" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.877113 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.882577 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx" event={"ID":"81563dca-2369-4349-9881-b2031df19de0","Type":"ContainerStarted","Data":"da692c0ba584a7ab97c605b08c775bdc8b51af50312c0206c646659fc5cc292f"} Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.883791 
4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.886651 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.888619 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.891620 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx" event={"ID":"0aa7a2bc-482f-4ed4-820d-331ea6d971c7","Type":"ContainerStarted","Data":"242c1a425fe10132f66fdff36ba1d3138f87c8f5cf298c9128a73cf7bf6fc7de"} Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.892695 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.899702 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.901369 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm" event={"ID":"2f99acab-4016-43dc-ab21-6d0c920def14","Type":"ContainerStarted","Data":"6786cae2ee7d0de0ad4f3eba0858417472aacb88286e613fa79f392fa505a3f9"} Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.902544 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.920235 4756 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.923780 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c" event={"ID":"99dfd9fb-d4ea-4f2a-bbd5-670d1a75c7fd","Type":"ContainerStarted","Data":"0d831d59c85194910f33fb9b24ccb622c6753514fd339b2b968a7e326b3b5654"} Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.924140 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-s2rwz" podStartSLOduration=4.028472074 podStartE2EDuration="15.92412431s" podCreationTimestamp="2025-11-24 12:41:30 +0000 UTC" firstStartedPulling="2025-11-24 12:41:32.741885022 +0000 UTC m=+825.099399164" lastFinishedPulling="2025-11-24 12:41:44.637537258 +0000 UTC m=+836.995051400" observedRunningTime="2025-11-24 12:41:45.904675672 +0000 UTC m=+838.262189814" watchObservedRunningTime="2025-11-24 12:41:45.92412431 +0000 UTC m=+838.281638462" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.924946 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.928050 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.937481 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-hbs6w" podStartSLOduration=4.577273136 podStartE2EDuration="15.937454749s" podCreationTimestamp="2025-11-24 12:41:30 +0000 UTC" firstStartedPulling="2025-11-24 12:41:33.128849597 +0000 UTC m=+825.486363739" lastFinishedPulling="2025-11-24 
12:41:44.48903121 +0000 UTC m=+836.846545352" observedRunningTime="2025-11-24 12:41:45.930193058 +0000 UTC m=+838.287707210" watchObservedRunningTime="2025-11-24 12:41:45.937454749 +0000 UTC m=+838.294968901" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.960500 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-djhp4" podStartSLOduration=4.335901569 podStartE2EDuration="15.960481546s" podCreationTimestamp="2025-11-24 12:41:30 +0000 UTC" firstStartedPulling="2025-11-24 12:41:32.718568227 +0000 UTC m=+825.076082369" lastFinishedPulling="2025-11-24 12:41:44.343148204 +0000 UTC m=+836.700662346" observedRunningTime="2025-11-24 12:41:45.955405155 +0000 UTC m=+838.312919297" watchObservedRunningTime="2025-11-24 12:41:45.960481546 +0000 UTC m=+838.317995688" Nov 24 12:41:45 crc kubenswrapper[4756]: I1124 12:41:45.983370 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-m4r6n" podStartSLOduration=3.579148478 podStartE2EDuration="14.983349568s" podCreationTimestamp="2025-11-24 12:41:31 +0000 UTC" firstStartedPulling="2025-11-24 12:41:33.129926257 +0000 UTC m=+825.487440399" lastFinishedPulling="2025-11-24 12:41:44.534127347 +0000 UTC m=+836.891641489" observedRunningTime="2025-11-24 12:41:45.973740072 +0000 UTC m=+838.331254214" watchObservedRunningTime="2025-11-24 12:41:45.983349568 +0000 UTC m=+838.340863710" Nov 24 12:41:46 crc kubenswrapper[4756]: I1124 12:41:46.006439 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-p9zgm" podStartSLOduration=3.976913918 podStartE2EDuration="16.006406566s" podCreationTimestamp="2025-11-24 12:41:30 +0000 UTC" firstStartedPulling="2025-11-24 12:41:32.456379904 +0000 UTC m=+824.813894046" lastFinishedPulling="2025-11-24 12:41:44.485872552 +0000 UTC 
m=+836.843386694" observedRunningTime="2025-11-24 12:41:45.992782649 +0000 UTC m=+838.350296791" watchObservedRunningTime="2025-11-24 12:41:46.006406566 +0000 UTC m=+838.363920708" Nov 24 12:41:46 crc kubenswrapper[4756]: I1124 12:41:46.023431 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-s8jfx" podStartSLOduration=4.007531245 podStartE2EDuration="16.023400506s" podCreationTimestamp="2025-11-24 12:41:30 +0000 UTC" firstStartedPulling="2025-11-24 12:41:32.470139455 +0000 UTC m=+824.827653597" lastFinishedPulling="2025-11-24 12:41:44.486008716 +0000 UTC m=+836.843522858" observedRunningTime="2025-11-24 12:41:46.014596213 +0000 UTC m=+838.372110375" watchObservedRunningTime="2025-11-24 12:41:46.023400506 +0000 UTC m=+838.380914648" Nov 24 12:41:46 crc kubenswrapper[4756]: I1124 12:41:46.037640 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-5mpgm" podStartSLOduration=3.8760955519999998 podStartE2EDuration="15.037606429s" podCreationTimestamp="2025-11-24 12:41:31 +0000 UTC" firstStartedPulling="2025-11-24 12:41:33.161615083 +0000 UTC m=+825.519129225" lastFinishedPulling="2025-11-24 12:41:44.32312597 +0000 UTC m=+836.680640102" observedRunningTime="2025-11-24 12:41:46.036649783 +0000 UTC m=+838.394163935" watchObservedRunningTime="2025-11-24 12:41:46.037606429 +0000 UTC m=+838.395120581" Nov 24 12:41:46 crc kubenswrapper[4756]: I1124 12:41:46.088936 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-5srrx" podStartSLOduration=3.760381378 podStartE2EDuration="16.088918139s" podCreationTimestamp="2025-11-24 12:41:30 +0000 UTC" firstStartedPulling="2025-11-24 12:41:31.990147686 +0000 UTC m=+824.347661828" lastFinishedPulling="2025-11-24 12:41:44.318684437 +0000 UTC m=+836.676198589" 
observedRunningTime="2025-11-24 12:41:46.063617519 +0000 UTC m=+838.421131661" watchObservedRunningTime="2025-11-24 12:41:46.088918139 +0000 UTC m=+838.446432281" Nov 24 12:41:48 crc kubenswrapper[4756]: I1124 12:41:48.952560 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4" event={"ID":"46c6804b-e74a-42d0-bc4e-2ffa7a5fa491","Type":"ContainerStarted","Data":"e5dcfae1f61bf0d142eee84fc373ebb6ce0d2e63eab5bf764ad20dde5df9f802"} Nov 24 12:41:48 crc kubenswrapper[4756]: I1124 12:41:48.960972 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l" event={"ID":"eb1f334a-14e2-4f63-8168-e5db902d8e70","Type":"ContainerStarted","Data":"0df0fb4b7caaa7a928d4206e596d0d0354180cf325e6dd6e3501a1bfa97eff31"} Nov 24 12:41:48 crc kubenswrapper[4756]: I1124 12:41:48.963616 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5" event={"ID":"4c0ece30-ae1b-4706-861c-2ee51f7332d7","Type":"ContainerStarted","Data":"3ee8f04bfa110ccfbc75aa8b8780d0d50e5af74f384de3d66115de9cfcb0e99a"} Nov 24 12:41:51 crc kubenswrapper[4756]: I1124 12:41:51.310630 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" Nov 24 12:41:51 crc kubenswrapper[4756]: I1124 12:41:51.331535 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-sh48c" podStartSLOduration=9.462131519 podStartE2EDuration="21.331516298s" podCreationTimestamp="2025-11-24 12:41:30 +0000 UTC" firstStartedPulling="2025-11-24 12:41:32.724352967 +0000 UTC m=+825.081867109" lastFinishedPulling="2025-11-24 12:41:44.593737736 +0000 UTC m=+836.951251888" observedRunningTime="2025-11-24 12:41:46.129015178 +0000 UTC m=+838.486529340" 
watchObservedRunningTime="2025-11-24 12:41:51.331516298 +0000 UTC m=+843.689030440" Nov 24 12:41:51 crc kubenswrapper[4756]: I1124 12:41:51.571315 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" Nov 24 12:41:51 crc kubenswrapper[4756]: I1124 12:41:51.593024 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" Nov 24 12:41:51 crc kubenswrapper[4756]: I1124 12:41:51.988942 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg" event={"ID":"248f663c-2ddc-487f-a33c-9d7b9bad23be","Type":"ContainerStarted","Data":"cbec822488667b0fd291c51d9dfcdad4024e2762cba9df29bcc25240a25e4df4"} Nov 24 12:41:51 crc kubenswrapper[4756]: I1124 12:41:51.990765 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5" event={"ID":"4c0ece30-ae1b-4706-861c-2ee51f7332d7","Type":"ContainerStarted","Data":"1caa73c01a35fee0eb3611477ae95fcfbd4f471d3b275bfafa368851c1d0bc6d"} Nov 24 12:41:51 crc kubenswrapper[4756]: I1124 12:41:51.990893 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5" Nov 24 12:41:51 crc kubenswrapper[4756]: I1124 12:41:51.992503 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x" event={"ID":"7480249a-d35a-4768-b5cc-daebd6f82c9b","Type":"ContainerStarted","Data":"e3c5d4e99b3d05c59bed737419c5c0e96ad8b7a0d13815dcea90c27adfc4b920"} Nov 24 12:41:51 crc kubenswrapper[4756]: I1124 12:41:51.994401 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" 
event={"ID":"8465956b-6245-447e-adcd-7ba8367ca117","Type":"ContainerStarted","Data":"f0c24010e496211c7657d97c1a2e8ea5f8d94426ff96454dc3399473fcd93cee"} Nov 24 12:41:51 crc kubenswrapper[4756]: I1124 12:41:51.996233 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4" event={"ID":"46c6804b-e74a-42d0-bc4e-2ffa7a5fa491","Type":"ContainerStarted","Data":"57073f0bb9a3252654c0deb9fbd6c82a79591952339f12697bf7d304c312463b"} Nov 24 12:41:51 crc kubenswrapper[4756]: I1124 12:41:51.996363 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4" Nov 24 12:41:51 crc kubenswrapper[4756]: I1124 12:41:51.998727 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" event={"ID":"4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e","Type":"ContainerStarted","Data":"931737a0ab94937f060ec385d43ccf121bb415a203b6aaacf91934603ec1b360"} Nov 24 12:41:52 crc kubenswrapper[4756]: I1124 12:41:52.000397 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52" event={"ID":"0624f295-ba46-4e28-9f0f-356cfbe6ecbc","Type":"ContainerStarted","Data":"6bd5c52e2ff553f7bba7414b8d53034fc5310051c2107abab8265c45a7377e46"} Nov 24 12:41:52 crc kubenswrapper[4756]: I1124 12:41:52.008513 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf" event={"ID":"274dfe9d-6821-481f-a605-bf8fbf101f89","Type":"ContainerStarted","Data":"1900a5c96c06164afa4853b1e7c89317fe43e16720d21d30f22b3ac32c619253"} Nov 24 12:41:52 crc kubenswrapper[4756]: I1124 12:41:52.011790 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" 
event={"ID":"bfc7de9e-743d-4492-979c-7043fb8b41d1","Type":"ContainerStarted","Data":"8190694280f516e4f80be736890cbb61f919c14b2a6fbdd4f8e03a4c068fc06e"} Nov 24 12:41:52 crc kubenswrapper[4756]: I1124 12:41:52.016742 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l" event={"ID":"eb1f334a-14e2-4f63-8168-e5db902d8e70","Type":"ContainerStarted","Data":"3aa5ef53dcade24dff1a102fd0c4e11522d7128ca40e1c20468582dffc0dbcee"} Nov 24 12:41:52 crc kubenswrapper[4756]: I1124 12:41:52.016928 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l" Nov 24 12:41:52 crc kubenswrapper[4756]: I1124 12:41:52.023453 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5" podStartSLOduration=6.01674607 podStartE2EDuration="21.023430238s" podCreationTimestamp="2025-11-24 12:41:31 +0000 UTC" firstStartedPulling="2025-11-24 12:41:33.199136601 +0000 UTC m=+825.556650743" lastFinishedPulling="2025-11-24 12:41:48.205820769 +0000 UTC m=+840.563334911" observedRunningTime="2025-11-24 12:41:52.013482733 +0000 UTC m=+844.370996875" watchObservedRunningTime="2025-11-24 12:41:52.023430238 +0000 UTC m=+844.380944380" Nov 24 12:41:52 crc kubenswrapper[4756]: I1124 12:41:52.046182 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-xjntp" podStartSLOduration=14.916039402 podStartE2EDuration="22.046141276s" podCreationTimestamp="2025-11-24 12:41:30 +0000 UTC" firstStartedPulling="2025-11-24 12:41:33.1383372 +0000 UTC m=+825.495851342" lastFinishedPulling="2025-11-24 12:41:40.268439074 +0000 UTC m=+832.625953216" observedRunningTime="2025-11-24 12:41:52.040469819 +0000 UTC m=+844.397983981" watchObservedRunningTime="2025-11-24 12:41:52.046141276 +0000 
UTC m=+844.403655418"
Nov 24 12:41:52 crc kubenswrapper[4756]: I1124 12:41:52.073501 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kvg52" podStartSLOduration=2.806602836 podStartE2EDuration="21.073477523s" podCreationTimestamp="2025-11-24 12:41:31 +0000 UTC" firstStartedPulling="2025-11-24 12:41:33.189087393 +0000 UTC m=+825.546601535" lastFinishedPulling="2025-11-24 12:41:51.45596208 +0000 UTC m=+843.813476222" observedRunningTime="2025-11-24 12:41:52.066969523 +0000 UTC m=+844.424483685" watchObservedRunningTime="2025-11-24 12:41:52.073477523 +0000 UTC m=+844.430991665"
Nov 24 12:41:52 crc kubenswrapper[4756]: I1124 12:41:52.137711 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4" podStartSLOduration=6.121507148 podStartE2EDuration="21.137692009s" podCreationTimestamp="2025-11-24 12:41:31 +0000 UTC" firstStartedPulling="2025-11-24 12:41:33.174736946 +0000 UTC m=+825.532251088" lastFinishedPulling="2025-11-24 12:41:48.190921807 +0000 UTC m=+840.548435949" observedRunningTime="2025-11-24 12:41:52.135741235 +0000 UTC m=+844.493255387" watchObservedRunningTime="2025-11-24 12:41:52.137692009 +0000 UTC m=+844.495206151"
Nov 24 12:41:52 crc kubenswrapper[4756]: I1124 12:41:52.156137 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-vwvff" podStartSLOduration=15.07753532 podStartE2EDuration="22.156111079s" podCreationTimestamp="2025-11-24 12:41:30 +0000 UTC" firstStartedPulling="2025-11-24 12:41:33.162868458 +0000 UTC m=+825.520382600" lastFinishedPulling="2025-11-24 12:41:40.241444207 +0000 UTC m=+832.598958359" observedRunningTime="2025-11-24 12:41:52.096385956 +0000 UTC m=+844.453900108" watchObservedRunningTime="2025-11-24 12:41:52.156111079 +0000 UTC m=+844.513625221"
Nov 24 12:41:52 crc kubenswrapper[4756]: I1124 12:41:52.194665 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-kqtc5" podStartSLOduration=15.074929787 podStartE2EDuration="22.194643674s" podCreationTimestamp="2025-11-24 12:41:30 +0000 UTC" firstStartedPulling="2025-11-24 12:41:33.152689337 +0000 UTC m=+825.510203479" lastFinishedPulling="2025-11-24 12:41:40.272403234 +0000 UTC m=+832.629917366" observedRunningTime="2025-11-24 12:41:52.192871725 +0000 UTC m=+844.550385877" watchObservedRunningTime="2025-11-24 12:41:52.194643674 +0000 UTC m=+844.552157816"
Nov 24 12:41:52 crc kubenswrapper[4756]: I1124 12:41:52.238861 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l" podStartSLOduration=6.220213969 podStartE2EDuration="21.238832877s" podCreationTimestamp="2025-11-24 12:41:31 +0000 UTC" firstStartedPulling="2025-11-24 12:41:33.172497045 +0000 UTC m=+825.530011187" lastFinishedPulling="2025-11-24 12:41:48.191115953 +0000 UTC m=+840.548630095" observedRunningTime="2025-11-24 12:41:52.230033074 +0000 UTC m=+844.587547226" watchObservedRunningTime="2025-11-24 12:41:52.238832877 +0000 UTC m=+844.596347019"
Nov 24 12:41:53 crc kubenswrapper[4756]: I1124 12:41:53.039925 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x" event={"ID":"7480249a-d35a-4768-b5cc-daebd6f82c9b","Type":"ContainerStarted","Data":"afc79c6152a526832d76e5724976383b5730d5aab59837f045eeb584893b241c"}
Nov 24 12:41:53 crc kubenswrapper[4756]: I1124 12:41:53.039988 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x"
Nov 24 12:41:53 crc kubenswrapper[4756]: I1124 12:41:53.041966 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf" event={"ID":"274dfe9d-6821-481f-a605-bf8fbf101f89","Type":"ContainerStarted","Data":"d7d89247eaecf4b2ea19f765109d7c9d782734c85151d7a18eb72f27d8f3a2af"}
Nov 24 12:41:53 crc kubenswrapper[4756]: I1124 12:41:53.042000 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf"
Nov 24 12:41:53 crc kubenswrapper[4756]: I1124 12:41:53.043963 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg" event={"ID":"248f663c-2ddc-487f-a33c-9d7b9bad23be","Type":"ContainerStarted","Data":"3adef8b18493c33ac17be048ae1ababf27a2fd2df2603316d7d40c7c41e1eaed"}
Nov 24 12:41:53 crc kubenswrapper[4756]: I1124 12:41:53.044433 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg"
Nov 24 12:41:53 crc kubenswrapper[4756]: I1124 12:41:53.047699 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7445f8dd59-46b2l"
Nov 24 12:41:53 crc kubenswrapper[4756]: I1124 12:41:53.047748 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-l9fh4"
Nov 24 12:41:53 crc kubenswrapper[4756]: I1124 12:41:53.048264 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-rb4l5"
Nov 24 12:41:53 crc kubenswrapper[4756]: I1124 12:41:53.059015 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x" podStartSLOduration=3.8064106410000003 podStartE2EDuration="22.058997702s" podCreationTimestamp="2025-11-24 12:41:31 +0000 UTC" firstStartedPulling="2025-11-24 12:41:33.201946219 +0000 UTC m=+825.559460361" lastFinishedPulling="2025-11-24 12:41:51.45453328 +0000 UTC m=+843.812047422" observedRunningTime="2025-11-24 12:41:53.055216209 +0000 UTC m=+845.412730351" watchObservedRunningTime="2025-11-24 12:41:53.058997702 +0000 UTC m=+845.416511844"
Nov 24 12:41:53 crc kubenswrapper[4756]: I1124 12:41:53.079085 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg" podStartSLOduration=3.8005325389999998 podStartE2EDuration="22.079066648s" podCreationTimestamp="2025-11-24 12:41:31 +0000 UTC" firstStartedPulling="2025-11-24 12:41:33.177852143 +0000 UTC m=+825.535366285" lastFinishedPulling="2025-11-24 12:41:51.456386252 +0000 UTC m=+843.813900394" observedRunningTime="2025-11-24 12:41:53.074873684 +0000 UTC m=+845.432387826" watchObservedRunningTime="2025-11-24 12:41:53.079066648 +0000 UTC m=+845.436580790"
Nov 24 12:41:53 crc kubenswrapper[4756]: I1124 12:41:53.112910 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf" podStartSLOduration=4.733398456 podStartE2EDuration="23.112885657s" podCreationTimestamp="2025-11-24 12:41:30 +0000 UTC" firstStartedPulling="2025-11-24 12:41:33.172608968 +0000 UTC m=+825.530123110" lastFinishedPulling="2025-11-24 12:41:51.552096169 +0000 UTC m=+843.909610311" observedRunningTime="2025-11-24 12:41:53.111823338 +0000 UTC m=+845.469337480" watchObservedRunningTime="2025-11-24 12:41:53.112885657 +0000 UTC m=+845.470399799"
Nov 24 12:41:55 crc kubenswrapper[4756]: I1124 12:41:55.288465 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb"
Nov 24 12:41:55 crc kubenswrapper[4756]: I1124 12:41:55.374221 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-57bd844978-vgphd"
Nov 24 12:42:01 crc kubenswrapper[4756]: I1124 12:42:01.471027 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-jgtkf"
Nov 24 12:42:01 crc kubenswrapper[4756]: I1124 12:42:01.701434 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-dkjtg"
Nov 24 12:42:01 crc kubenswrapper[4756]: I1124 12:42:01.938567 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-spm8x"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.385632 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-m7t44"]
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.387795 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-m7t44"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.390457 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.390759 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.390926 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.391491 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-vtp4q"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.398955 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-m7t44"]
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.469766 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8ctpg"]
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.471142 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.473417 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.474073 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebfee982-57b4-4073-ba7c-bffd8321f09f-config\") pod \"dnsmasq-dns-675f4bcbfc-m7t44\" (UID: \"ebfee982-57b4-4073-ba7c-bffd8321f09f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-m7t44"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.474208 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn9qs\" (UniqueName: \"kubernetes.io/projected/ebfee982-57b4-4073-ba7c-bffd8321f09f-kube-api-access-fn9qs\") pod \"dnsmasq-dns-675f4bcbfc-m7t44\" (UID: \"ebfee982-57b4-4073-ba7c-bffd8321f09f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-m7t44"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.484824 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8ctpg"]
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.575902 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920a6ee8-58e2-4db1-932a-85fc8c182594-config\") pod \"dnsmasq-dns-78dd6ddcc-8ctpg\" (UID: \"920a6ee8-58e2-4db1-932a-85fc8c182594\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.575972 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn9qs\" (UniqueName: \"kubernetes.io/projected/ebfee982-57b4-4073-ba7c-bffd8321f09f-kube-api-access-fn9qs\") pod \"dnsmasq-dns-675f4bcbfc-m7t44\" (UID: \"ebfee982-57b4-4073-ba7c-bffd8321f09f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-m7t44"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.575999 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr2s6\" (UniqueName: \"kubernetes.io/projected/920a6ee8-58e2-4db1-932a-85fc8c182594-kube-api-access-rr2s6\") pod \"dnsmasq-dns-78dd6ddcc-8ctpg\" (UID: \"920a6ee8-58e2-4db1-932a-85fc8c182594\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.576041 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebfee982-57b4-4073-ba7c-bffd8321f09f-config\") pod \"dnsmasq-dns-675f4bcbfc-m7t44\" (UID: \"ebfee982-57b4-4073-ba7c-bffd8321f09f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-m7t44"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.576058 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920a6ee8-58e2-4db1-932a-85fc8c182594-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8ctpg\" (UID: \"920a6ee8-58e2-4db1-932a-85fc8c182594\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.577370 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebfee982-57b4-4073-ba7c-bffd8321f09f-config\") pod \"dnsmasq-dns-675f4bcbfc-m7t44\" (UID: \"ebfee982-57b4-4073-ba7c-bffd8321f09f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-m7t44"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.596911 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn9qs\" (UniqueName: \"kubernetes.io/projected/ebfee982-57b4-4073-ba7c-bffd8321f09f-kube-api-access-fn9qs\") pod \"dnsmasq-dns-675f4bcbfc-m7t44\" (UID: \"ebfee982-57b4-4073-ba7c-bffd8321f09f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-m7t44"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.677461 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr2s6\" (UniqueName: \"kubernetes.io/projected/920a6ee8-58e2-4db1-932a-85fc8c182594-kube-api-access-rr2s6\") pod \"dnsmasq-dns-78dd6ddcc-8ctpg\" (UID: \"920a6ee8-58e2-4db1-932a-85fc8c182594\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.677813 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920a6ee8-58e2-4db1-932a-85fc8c182594-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8ctpg\" (UID: \"920a6ee8-58e2-4db1-932a-85fc8c182594\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.677876 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920a6ee8-58e2-4db1-932a-85fc8c182594-config\") pod \"dnsmasq-dns-78dd6ddcc-8ctpg\" (UID: \"920a6ee8-58e2-4db1-932a-85fc8c182594\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.678715 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920a6ee8-58e2-4db1-932a-85fc8c182594-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8ctpg\" (UID: \"920a6ee8-58e2-4db1-932a-85fc8c182594\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.678842 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920a6ee8-58e2-4db1-932a-85fc8c182594-config\") pod \"dnsmasq-dns-78dd6ddcc-8ctpg\" (UID: \"920a6ee8-58e2-4db1-932a-85fc8c182594\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.694272 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr2s6\" (UniqueName: \"kubernetes.io/projected/920a6ee8-58e2-4db1-932a-85fc8c182594-kube-api-access-rr2s6\") pod \"dnsmasq-dns-78dd6ddcc-8ctpg\" (UID: \"920a6ee8-58e2-4db1-932a-85fc8c182594\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.710579 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-m7t44"
Nov 24 12:42:22 crc kubenswrapper[4756]: I1124 12:42:22.786458 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg"
Nov 24 12:42:23 crc kubenswrapper[4756]: I1124 12:42:23.135474 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-m7t44"]
Nov 24 12:42:23 crc kubenswrapper[4756]: W1124 12:42:23.234540 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod920a6ee8_58e2_4db1_932a_85fc8c182594.slice/crio-169f9fe8b3f70d37922b90ba0ed02d6bad4f1c1f59dd0f1442fc05f294f728e4 WatchSource:0}: Error finding container 169f9fe8b3f70d37922b90ba0ed02d6bad4f1c1f59dd0f1442fc05f294f728e4: Status 404 returned error can't find the container with id 169f9fe8b3f70d37922b90ba0ed02d6bad4f1c1f59dd0f1442fc05f294f728e4
Nov 24 12:42:23 crc kubenswrapper[4756]: I1124 12:42:23.235419 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8ctpg"]
Nov 24 12:42:23 crc kubenswrapper[4756]: I1124 12:42:23.281760 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-m7t44" event={"ID":"ebfee982-57b4-4073-ba7c-bffd8321f09f","Type":"ContainerStarted","Data":"ffbcad068c3336bf4a82453df2cecf24026ef473556fdae43b06d918b98ea877"}
Nov 24 12:42:23 crc kubenswrapper[4756]: I1124 12:42:23.283010 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg" event={"ID":"920a6ee8-58e2-4db1-932a-85fc8c182594","Type":"ContainerStarted","Data":"169f9fe8b3f70d37922b90ba0ed02d6bad4f1c1f59dd0f1442fc05f294f728e4"}
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.016575 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-m7t44"]
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.044284 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nwqvv"]
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.046320 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nwqvv"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.085698 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nwqvv"]
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.120989 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v9fg\" (UniqueName: \"kubernetes.io/projected/32d08434-d30c-499b-9d05-4e9ca8fe28a1-kube-api-access-2v9fg\") pod \"dnsmasq-dns-666b6646f7-nwqvv\" (UID: \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\") " pod="openstack/dnsmasq-dns-666b6646f7-nwqvv"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.121208 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d08434-d30c-499b-9d05-4e9ca8fe28a1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nwqvv\" (UID: \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\") " pod="openstack/dnsmasq-dns-666b6646f7-nwqvv"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.121334 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d08434-d30c-499b-9d05-4e9ca8fe28a1-config\") pod \"dnsmasq-dns-666b6646f7-nwqvv\" (UID: \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\") " pod="openstack/dnsmasq-dns-666b6646f7-nwqvv"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.222607 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d08434-d30c-499b-9d05-4e9ca8fe28a1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nwqvv\" (UID: \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\") " pod="openstack/dnsmasq-dns-666b6646f7-nwqvv"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.222712 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d08434-d30c-499b-9d05-4e9ca8fe28a1-config\") pod \"dnsmasq-dns-666b6646f7-nwqvv\" (UID: \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\") " pod="openstack/dnsmasq-dns-666b6646f7-nwqvv"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.222765 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v9fg\" (UniqueName: \"kubernetes.io/projected/32d08434-d30c-499b-9d05-4e9ca8fe28a1-kube-api-access-2v9fg\") pod \"dnsmasq-dns-666b6646f7-nwqvv\" (UID: \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\") " pod="openstack/dnsmasq-dns-666b6646f7-nwqvv"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.224234 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d08434-d30c-499b-9d05-4e9ca8fe28a1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nwqvv\" (UID: \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\") " pod="openstack/dnsmasq-dns-666b6646f7-nwqvv"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.225253 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d08434-d30c-499b-9d05-4e9ca8fe28a1-config\") pod \"dnsmasq-dns-666b6646f7-nwqvv\" (UID: \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\") " pod="openstack/dnsmasq-dns-666b6646f7-nwqvv"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.254008 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v9fg\" (UniqueName: \"kubernetes.io/projected/32d08434-d30c-499b-9d05-4e9ca8fe28a1-kube-api-access-2v9fg\") pod \"dnsmasq-dns-666b6646f7-nwqvv\" (UID: \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\") " pod="openstack/dnsmasq-dns-666b6646f7-nwqvv"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.380428 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nwqvv"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.400858 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8ctpg"]
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.429430 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b5qpm"]
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.430950 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.445605 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b5qpm"]
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.527425 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldcnw\" (UniqueName: \"kubernetes.io/projected/40d39949-cd58-4321-af4c-8427b4766e1e-kube-api-access-ldcnw\") pod \"dnsmasq-dns-57d769cc4f-b5qpm\" (UID: \"40d39949-cd58-4321-af4c-8427b4766e1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.527541 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d39949-cd58-4321-af4c-8427b4766e1e-config\") pod \"dnsmasq-dns-57d769cc4f-b5qpm\" (UID: \"40d39949-cd58-4321-af4c-8427b4766e1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.527613 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40d39949-cd58-4321-af4c-8427b4766e1e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b5qpm\" (UID: \"40d39949-cd58-4321-af4c-8427b4766e1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.629922 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldcnw\" (UniqueName: \"kubernetes.io/projected/40d39949-cd58-4321-af4c-8427b4766e1e-kube-api-access-ldcnw\") pod \"dnsmasq-dns-57d769cc4f-b5qpm\" (UID: \"40d39949-cd58-4321-af4c-8427b4766e1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.630037 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d39949-cd58-4321-af4c-8427b4766e1e-config\") pod \"dnsmasq-dns-57d769cc4f-b5qpm\" (UID: \"40d39949-cd58-4321-af4c-8427b4766e1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.630076 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40d39949-cd58-4321-af4c-8427b4766e1e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b5qpm\" (UID: \"40d39949-cd58-4321-af4c-8427b4766e1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.631411 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40d39949-cd58-4321-af4c-8427b4766e1e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b5qpm\" (UID: \"40d39949-cd58-4321-af4c-8427b4766e1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.631839 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d39949-cd58-4321-af4c-8427b4766e1e-config\") pod \"dnsmasq-dns-57d769cc4f-b5qpm\" (UID: \"40d39949-cd58-4321-af4c-8427b4766e1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.654492 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldcnw\" (UniqueName: \"kubernetes.io/projected/40d39949-cd58-4321-af4c-8427b4766e1e-kube-api-access-ldcnw\") pod \"dnsmasq-dns-57d769cc4f-b5qpm\" (UID: \"40d39949-cd58-4321-af4c-8427b4766e1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm"
Nov 24 12:42:25 crc kubenswrapper[4756]: I1124 12:42:25.821480 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.038797 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nwqvv"]
Nov 24 12:42:26 crc kubenswrapper[4756]: W1124 12:42:26.039450 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32d08434_d30c_499b_9d05_4e9ca8fe28a1.slice/crio-c48a1460d7a825d92c84a354bab9b3d46a0ce200f9fe3bd1b8cf4a11a441f8fb WatchSource:0}: Error finding container c48a1460d7a825d92c84a354bab9b3d46a0ce200f9fe3bd1b8cf4a11a441f8fb: Status 404 returned error can't find the container with id c48a1460d7a825d92c84a354bab9b3d46a0ce200f9fe3bd1b8cf4a11a441f8fb
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.226557 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.232994 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.235998 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.236094 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.236239 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.236559 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k7csk"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.237068 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.237450 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.237676 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.254401 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.345254 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.345296 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.345667 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.345696 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.346064 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.346089 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.346236 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.346883 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.347737 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-config-data\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.347883 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.348046 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cxrz\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-kube-api-access-4cxrz\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.351443 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b5qpm"]
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.352979 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nwqvv" event={"ID":"32d08434-d30c-499b-9d05-4e9ca8fe28a1","Type":"ContainerStarted","Data":"c48a1460d7a825d92c84a354bab9b3d46a0ce200f9fe3bd1b8cf4a11a441f8fb"}
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.450866 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-config-data\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.450925 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.451006 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cxrz\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-kube-api-access-4cxrz\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.451043 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.451066 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.451090 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.451131 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.451170 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.451192 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.451302 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.451376 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.455659 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.456067 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.456142 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.456926 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.457123 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-config-data\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.457681 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.460741 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.461544 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.474893 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.475267 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0"
Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.483274 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cxrz\" (UniqueName: 
\"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-kube-api-access-4cxrz\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.489338 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " pod="openstack/rabbitmq-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.562843 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.569851 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.580012 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.580386 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.580570 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.580732 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.581124 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.581391 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.581688 4756 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mprjs" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.584694 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.603314 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.661303 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzv5k\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-kube-api-access-tzv5k\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.661356 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.661387 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.661413 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.661447 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.661471 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.661495 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.661518 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.661536 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.661557 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12075358-f893-49bc-9ace-dda0ce2865ec-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.661584 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12075358-f893-49bc-9ace-dda0ce2865ec-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.764400 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.764464 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.764486 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc 
kubenswrapper[4756]: I1124 12:42:26.764509 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12075358-f893-49bc-9ace-dda0ce2865ec-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.764541 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12075358-f893-49bc-9ace-dda0ce2865ec-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.764601 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzv5k\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-kube-api-access-tzv5k\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.764618 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.764640 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.764663 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.764693 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.764715 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.764891 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.779855 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.780384 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.780432 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.781252 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.781619 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.790963 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.801543 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12075358-f893-49bc-9ace-dda0ce2865ec-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.802904 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12075358-f893-49bc-9ace-dda0ce2865ec-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.803949 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.825003 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzv5k\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-kube-api-access-tzv5k\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.831113 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:26 crc kubenswrapper[4756]: I1124 12:42:26.902794 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.887019 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.889014 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.895256 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.895577 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.895819 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.896079 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.896253 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-gwpph" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.899140 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.997746 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.997815 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8gj\" (UniqueName: 
\"kubernetes.io/projected/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-kube-api-access-dd8gj\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.997836 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-config-data-default\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.997867 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.997889 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.997945 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.997975 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:27 crc kubenswrapper[4756]: I1124 12:42:27.998004 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-kolla-config\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.098995 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.099044 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.099077 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.099098 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.099129 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-kolla-config\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.099170 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.099217 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd8gj\" (UniqueName: \"kubernetes.io/projected/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-kube-api-access-dd8gj\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.099238 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-config-data-default\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.099934 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: 
I1124 12:42:28.100143 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-config-data-default\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.100172 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.100907 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-kolla-config\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.105461 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.108315 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.109329 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.120554 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd8gj\" (UniqueName: \"kubernetes.io/projected/ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45-kube-api-access-dd8gj\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.121907 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45\") " pod="openstack/openstack-galera-0" Nov 24 12:42:28 crc kubenswrapper[4756]: I1124 12:42:28.220985 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.109183 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.113423 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.115343 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-xxftv" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.115880 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.115908 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.116274 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.119793 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.218277 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlfmf\" (UniqueName: \"kubernetes.io/projected/80020f7a-2503-4446-84ea-148cb2bac0be-kube-api-access-wlfmf\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.218320 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/80020f7a-2503-4446-84ea-148cb2bac0be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.218349 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/80020f7a-2503-4446-84ea-148cb2bac0be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.218604 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/80020f7a-2503-4446-84ea-148cb2bac0be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.218719 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/80020f7a-2503-4446-84ea-148cb2bac0be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.218762 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80020f7a-2503-4446-84ea-148cb2bac0be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.218784 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80020f7a-2503-4446-84ea-148cb2bac0be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.218825 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.320104 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.320209 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlfmf\" (UniqueName: \"kubernetes.io/projected/80020f7a-2503-4446-84ea-148cb2bac0be-kube-api-access-wlfmf\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.320247 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/80020f7a-2503-4446-84ea-148cb2bac0be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.320274 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/80020f7a-2503-4446-84ea-148cb2bac0be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.320318 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/80020f7a-2503-4446-84ea-148cb2bac0be-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.320339 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/80020f7a-2503-4446-84ea-148cb2bac0be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.320362 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80020f7a-2503-4446-84ea-148cb2bac0be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.320382 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80020f7a-2503-4446-84ea-148cb2bac0be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.321935 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80020f7a-2503-4446-84ea-148cb2bac0be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.322415 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") device mount path 
\"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.322969 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/80020f7a-2503-4446-84ea-148cb2bac0be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.323259 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/80020f7a-2503-4446-84ea-148cb2bac0be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.323388 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/80020f7a-2503-4446-84ea-148cb2bac0be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.331136 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/80020f7a-2503-4446-84ea-148cb2bac0be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.333044 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80020f7a-2503-4446-84ea-148cb2bac0be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: 
I1124 12:42:29.350097 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlfmf\" (UniqueName: \"kubernetes.io/projected/80020f7a-2503-4446-84ea-148cb2bac0be-kube-api-access-wlfmf\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.352179 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"80020f7a-2503-4446-84ea-148cb2bac0be\") " pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.436081 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.478203 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.480846 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.484721 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.484799 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-q27n4" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.485575 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.526605 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.636913 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxlds\" (UniqueName: \"kubernetes.io/projected/be09de3c-c143-4b11-98ca-45292b9b015c-kube-api-access-cxlds\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.637132 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be09de3c-c143-4b11-98ca-45292b9b015c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.637346 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be09de3c-c143-4b11-98ca-45292b9b015c-kolla-config\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.637398 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be09de3c-c143-4b11-98ca-45292b9b015c-config-data\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.638327 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/be09de3c-c143-4b11-98ca-45292b9b015c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.739729 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be09de3c-c143-4b11-98ca-45292b9b015c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.739813 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be09de3c-c143-4b11-98ca-45292b9b015c-kolla-config\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.739839 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be09de3c-c143-4b11-98ca-45292b9b015c-config-data\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.739873 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/be09de3c-c143-4b11-98ca-45292b9b015c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " 
pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.739912 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxlds\" (UniqueName: \"kubernetes.io/projected/be09de3c-c143-4b11-98ca-45292b9b015c-kube-api-access-cxlds\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.740794 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be09de3c-c143-4b11-98ca-45292b9b015c-kolla-config\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.740936 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be09de3c-c143-4b11-98ca-45292b9b015c-config-data\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.745606 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be09de3c-c143-4b11-98ca-45292b9b015c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.756508 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxlds\" (UniqueName: \"kubernetes.io/projected/be09de3c-c143-4b11-98ca-45292b9b015c-kube-api-access-cxlds\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.756708 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/be09de3c-c143-4b11-98ca-45292b9b015c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"be09de3c-c143-4b11-98ca-45292b9b015c\") " pod="openstack/memcached-0" Nov 24 12:42:29 crc kubenswrapper[4756]: I1124 12:42:29.805896 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 12:42:30 crc kubenswrapper[4756]: I1124 12:42:30.388531 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm" event={"ID":"40d39949-cd58-4321-af4c-8427b4766e1e","Type":"ContainerStarted","Data":"a7ee4ce978dda854c2867609bc985029012d40ac5e93817be35fde307397f015"} Nov 24 12:42:31 crc kubenswrapper[4756]: I1124 12:42:31.368314 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:42:31 crc kubenswrapper[4756]: I1124 12:42:31.370227 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:42:31 crc kubenswrapper[4756]: I1124 12:42:31.374616 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hdp9j" Nov 24 12:42:31 crc kubenswrapper[4756]: I1124 12:42:31.384941 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:42:31 crc kubenswrapper[4756]: I1124 12:42:31.466095 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9tdp\" (UniqueName: \"kubernetes.io/projected/b9447314-6235-4140-879b-cc20306cc7e1-kube-api-access-v9tdp\") pod \"kube-state-metrics-0\" (UID: \"b9447314-6235-4140-879b-cc20306cc7e1\") " pod="openstack/kube-state-metrics-0" Nov 24 12:42:31 crc kubenswrapper[4756]: I1124 12:42:31.570570 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9tdp\" (UniqueName: 
\"kubernetes.io/projected/b9447314-6235-4140-879b-cc20306cc7e1-kube-api-access-v9tdp\") pod \"kube-state-metrics-0\" (UID: \"b9447314-6235-4140-879b-cc20306cc7e1\") " pod="openstack/kube-state-metrics-0" Nov 24 12:42:31 crc kubenswrapper[4756]: I1124 12:42:31.629061 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9tdp\" (UniqueName: \"kubernetes.io/projected/b9447314-6235-4140-879b-cc20306cc7e1-kube-api-access-v9tdp\") pod \"kube-state-metrics-0\" (UID: \"b9447314-6235-4140-879b-cc20306cc7e1\") " pod="openstack/kube-state-metrics-0" Nov 24 12:42:31 crc kubenswrapper[4756]: I1124 12:42:31.699738 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.749672 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.751616 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.753800 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.754574 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.754929 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-d4qzx" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.755137 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.756049 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.765974 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.768617 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.896798 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.896847 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-config\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.896876 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67n62\" (UniqueName: \"kubernetes.io/projected/a974f608-51c8-4650-be4a-fad42e19bd48-kube-api-access-67n62\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.897072 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a974f608-51c8-4650-be4a-fad42e19bd48-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.897185 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a974f608-51c8-4650-be4a-fad42e19bd48-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.897213 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.897316 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:32 crc kubenswrapper[4756]: I1124 12:42:32.897429 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a974f608-51c8-4650-be4a-fad42e19bd48-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.055291 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.055356 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-config\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.055380 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67n62\" (UniqueName: \"kubernetes.io/projected/a974f608-51c8-4650-be4a-fad42e19bd48-kube-api-access-67n62\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 
12:42:33.055452 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a974f608-51c8-4650-be4a-fad42e19bd48-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.055505 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a974f608-51c8-4650-be4a-fad42e19bd48-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.055623 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.055677 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.055744 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a974f608-51c8-4650-be4a-fad42e19bd48-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 
12:42:33.056767 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a974f608-51c8-4650-be4a-fad42e19bd48-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.059623 4756 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.059666 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ac3567aefb4ff022402a71c4c19bba7ed7a13b4fde27606ef830df8410391bf/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.060102 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a974f608-51c8-4650-be4a-fad42e19bd48-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.060615 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-config\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.061352 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a974f608-51c8-4650-be4a-fad42e19bd48-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.063994 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.067912 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.082955 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67n62\" (UniqueName: \"kubernetes.io/projected/a974f608-51c8-4650-be4a-fad42e19bd48-kube-api-access-67n62\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.114433 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"prometheus-metric-storage-0\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.375047 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.705352 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2lk9k"] Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.707637 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.711579 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.711591 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.712031 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-tm79g" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.718426 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2lk9k"] Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.725550 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-v5r4t"] Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.727583 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.757398 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-v5r4t"] Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.765335 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9af141a-c02a-4457-b68e-111765a62280-var-run-ovn\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.765428 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjplh\" (UniqueName: \"kubernetes.io/projected/f9af141a-c02a-4457-b68e-111765a62280-kube-api-access-jjplh\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.765493 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9af141a-c02a-4457-b68e-111765a62280-var-log-ovn\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.765544 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9af141a-c02a-4457-b68e-111765a62280-scripts\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.765634 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f9af141a-c02a-4457-b68e-111765a62280-ovn-controller-tls-certs\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.765705 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9af141a-c02a-4457-b68e-111765a62280-var-run\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.765733 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9af141a-c02a-4457-b68e-111765a62280-combined-ca-bundle\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.867451 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2d0b4104-d3c6-4219-b239-a52830b8429b-var-lib\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.867529 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d0b4104-d3c6-4219-b239-a52830b8429b-var-run\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.867556 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/f9af141a-c02a-4457-b68e-111765a62280-var-run-ovn\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.867601 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjplh\" (UniqueName: \"kubernetes.io/projected/f9af141a-c02a-4457-b68e-111765a62280-kube-api-access-jjplh\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.867666 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9af141a-c02a-4457-b68e-111765a62280-scripts\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.867690 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2d0b4104-d3c6-4219-b239-a52830b8429b-var-log\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.867725 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9af141a-c02a-4457-b68e-111765a62280-var-run\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.867743 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9af141a-c02a-4457-b68e-111765a62280-combined-ca-bundle\") pod \"ovn-controller-2lk9k\" (UID: 
\"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.867769 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rm22\" (UniqueName: \"kubernetes.io/projected/2d0b4104-d3c6-4219-b239-a52830b8429b-kube-api-access-9rm22\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.867792 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d0b4104-d3c6-4219-b239-a52830b8429b-scripts\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.867821 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9af141a-c02a-4457-b68e-111765a62280-var-log-ovn\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.867942 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9af141a-c02a-4457-b68e-111765a62280-ovn-controller-tls-certs\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.868003 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2d0b4104-d3c6-4219-b239-a52830b8429b-etc-ovs\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " 
pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.868243 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f9af141a-c02a-4457-b68e-111765a62280-var-log-ovn\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.868349 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f9af141a-c02a-4457-b68e-111765a62280-var-run-ovn\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.868444 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f9af141a-c02a-4457-b68e-111765a62280-var-run\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.869737 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9af141a-c02a-4457-b68e-111765a62280-scripts\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.871657 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9af141a-c02a-4457-b68e-111765a62280-ovn-controller-tls-certs\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.871801 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9af141a-c02a-4457-b68e-111765a62280-combined-ca-bundle\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.886276 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjplh\" (UniqueName: \"kubernetes.io/projected/f9af141a-c02a-4457-b68e-111765a62280-kube-api-access-jjplh\") pod \"ovn-controller-2lk9k\" (UID: \"f9af141a-c02a-4457-b68e-111765a62280\") " pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.969175 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rm22\" (UniqueName: \"kubernetes.io/projected/2d0b4104-d3c6-4219-b239-a52830b8429b-kube-api-access-9rm22\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.969230 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d0b4104-d3c6-4219-b239-a52830b8429b-scripts\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.969272 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2d0b4104-d3c6-4219-b239-a52830b8429b-etc-ovs\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.969303 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2d0b4104-d3c6-4219-b239-a52830b8429b-var-lib\") pod 
\"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.969345 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d0b4104-d3c6-4219-b239-a52830b8429b-var-run\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.969390 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2d0b4104-d3c6-4219-b239-a52830b8429b-var-log\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.969618 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2d0b4104-d3c6-4219-b239-a52830b8429b-var-log\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.969702 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2d0b4104-d3c6-4219-b239-a52830b8429b-etc-ovs\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.969787 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2d0b4104-d3c6-4219-b239-a52830b8429b-var-lib\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.969847 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d0b4104-d3c6-4219-b239-a52830b8429b-var-run\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.972212 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d0b4104-d3c6-4219-b239-a52830b8429b-scripts\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:33 crc kubenswrapper[4756]: I1124 12:42:33.986501 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rm22\" (UniqueName: \"kubernetes.io/projected/2d0b4104-d3c6-4219-b239-a52830b8429b-kube-api-access-9rm22\") pod \"ovn-controller-ovs-v5r4t\" (UID: \"2d0b4104-d3c6-4219-b239-a52830b8429b\") " pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:34 crc kubenswrapper[4756]: I1124 12:42:34.027478 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:34 crc kubenswrapper[4756]: I1124 12:42:34.059141 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.099155 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.105735 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.109365 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.109597 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.109744 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.111536 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.116126 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-2rkj9" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.124593 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.255694 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.255798 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.255973 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8n2hg\" (UniqueName: \"kubernetes.io/projected/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-kube-api-access-8n2hg\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.256062 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.256238 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-config\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.256268 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.256360 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.256649 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.358129 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.358220 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.358278 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.358308 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.358349 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 
crc kubenswrapper[4756]: I1124 12:42:38.358392 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n2hg\" (UniqueName: \"kubernetes.io/projected/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-kube-api-access-8n2hg\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.358427 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.358473 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-config\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.358934 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.359666 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.359767 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-config\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.363013 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.368315 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.369799 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.370010 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.380604 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n2hg\" (UniqueName: \"kubernetes.io/projected/f016c6c2-d6cf-42ff-a700-314a97bb1bcc-kube-api-access-8n2hg\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " 
pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.409089 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f016c6c2-d6cf-42ff-a700-314a97bb1bcc\") " pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.431761 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.846824 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.848148 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.851583 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.851910 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-ss6dp" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.854703 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.854982 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.864096 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.968524 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57506583-001b-4baf-b8b1-6cd4fc282472-scripts\") pod \"ovsdbserver-sb-0\" 
(UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.968915 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/57506583-001b-4baf-b8b1-6cd4fc282472-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.968954 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4hnt\" (UniqueName: \"kubernetes.io/projected/57506583-001b-4baf-b8b1-6cd4fc282472-kube-api-access-z4hnt\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.969062 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/57506583-001b-4baf-b8b1-6cd4fc282472-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.969103 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57506583-001b-4baf-b8b1-6cd4fc282472-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.969129 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57506583-001b-4baf-b8b1-6cd4fc282472-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.969167 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57506583-001b-4baf-b8b1-6cd4fc282472-config\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:38 crc kubenswrapper[4756]: I1124 12:42:38.969191 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.070979 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4hnt\" (UniqueName: \"kubernetes.io/projected/57506583-001b-4baf-b8b1-6cd4fc282472-kube-api-access-z4hnt\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.071064 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/57506583-001b-4baf-b8b1-6cd4fc282472-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.071096 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57506583-001b-4baf-b8b1-6cd4fc282472-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.071110 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57506583-001b-4baf-b8b1-6cd4fc282472-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.071132 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57506583-001b-4baf-b8b1-6cd4fc282472-config\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.071149 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.071200 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57506583-001b-4baf-b8b1-6cd4fc282472-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.071266 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/57506583-001b-4baf-b8b1-6cd4fc282472-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.072052 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"57506583-001b-4baf-b8b1-6cd4fc282472\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.072210 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/57506583-001b-4baf-b8b1-6cd4fc282472-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.073511 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57506583-001b-4baf-b8b1-6cd4fc282472-config\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.073915 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57506583-001b-4baf-b8b1-6cd4fc282472-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.075685 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57506583-001b-4baf-b8b1-6cd4fc282472-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.076072 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57506583-001b-4baf-b8b1-6cd4fc282472-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.078184 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/57506583-001b-4baf-b8b1-6cd4fc282472-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.090362 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4hnt\" (UniqueName: \"kubernetes.io/projected/57506583-001b-4baf-b8b1-6cd4fc282472-kube-api-access-z4hnt\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.095489 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"57506583-001b-4baf-b8b1-6cd4fc282472\") " pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:39 crc kubenswrapper[4756]: I1124 12:42:39.179140 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:41 crc kubenswrapper[4756]: E1124 12:42:41.390043 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 12:42:41 crc kubenswrapper[4756]: E1124 12:42:41.390324 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fn9qs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-m7t44_openstack(ebfee982-57b4-4073-ba7c-bffd8321f09f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:42:41 crc kubenswrapper[4756]: E1124 12:42:41.392538 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-m7t44" podUID="ebfee982-57b4-4073-ba7c-bffd8321f09f" Nov 24 12:42:41 crc kubenswrapper[4756]: E1124 12:42:41.402179 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 12:42:41 crc kubenswrapper[4756]: E1124 12:42:41.402363 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr2s6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-8ctpg_openstack(920a6ee8-58e2-4db1-932a-85fc8c182594): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:42:41 crc kubenswrapper[4756]: E1124 12:42:41.403551 4756 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg" podUID="920a6ee8-58e2-4db1-932a-85fc8c182594" Nov 24 12:42:41 crc kubenswrapper[4756]: I1124 12:42:41.834024 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:42:41 crc kubenswrapper[4756]: I1124 12:42:41.956134 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:42:42 crc kubenswrapper[4756]: W1124 12:42:42.083290 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12075358_f893_49bc_9ace_dda0ce2865ec.slice/crio-b1461fd0949631826e77090ec0eae9459065eb5544e43d0567b1df6be0ef3789 WatchSource:0}: Error finding container b1461fd0949631826e77090ec0eae9459065eb5544e43d0567b1df6be0ef3789: Status 404 returned error can't find the container with id b1461fd0949631826e77090ec0eae9459065eb5544e43d0567b1df6be0ef3789 Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.084104 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-m7t44" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.100131 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.137828 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.148972 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.155711 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn9qs\" (UniqueName: \"kubernetes.io/projected/ebfee982-57b4-4073-ba7c-bffd8321f09f-kube-api-access-fn9qs\") pod \"ebfee982-57b4-4073-ba7c-bffd8321f09f\" (UID: \"ebfee982-57b4-4073-ba7c-bffd8321f09f\") " Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.155808 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebfee982-57b4-4073-ba7c-bffd8321f09f-config\") pod \"ebfee982-57b4-4073-ba7c-bffd8321f09f\" (UID: \"ebfee982-57b4-4073-ba7c-bffd8321f09f\") " Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.157354 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebfee982-57b4-4073-ba7c-bffd8321f09f-config" (OuterVolumeSpecName: "config") pod "ebfee982-57b4-4073-ba7c-bffd8321f09f" (UID: "ebfee982-57b4-4073-ba7c-bffd8321f09f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.161377 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebfee982-57b4-4073-ba7c-bffd8321f09f-kube-api-access-fn9qs" (OuterVolumeSpecName: "kube-api-access-fn9qs") pod "ebfee982-57b4-4073-ba7c-bffd8321f09f" (UID: "ebfee982-57b4-4073-ba7c-bffd8321f09f"). InnerVolumeSpecName "kube-api-access-fn9qs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.163546 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 12:42:42 crc kubenswrapper[4756]: W1124 12:42:42.165067 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe09de3c_c143_4b11_98ca_45292b9b015c.slice/crio-b214622e08dc77576521264c6ddde324560d1e8b7c26006cfe23890950829a4f WatchSource:0}: Error finding container b214622e08dc77576521264c6ddde324560d1e8b7c26006cfe23890950829a4f: Status 404 returned error can't find the container with id b214622e08dc77576521264c6ddde324560d1e8b7c26006cfe23890950829a4f Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.257728 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920a6ee8-58e2-4db1-932a-85fc8c182594-dns-svc\") pod \"920a6ee8-58e2-4db1-932a-85fc8c182594\" (UID: \"920a6ee8-58e2-4db1-932a-85fc8c182594\") " Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.257808 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920a6ee8-58e2-4db1-932a-85fc8c182594-config\") pod \"920a6ee8-58e2-4db1-932a-85fc8c182594\" (UID: \"920a6ee8-58e2-4db1-932a-85fc8c182594\") " Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.257851 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr2s6\" (UniqueName: \"kubernetes.io/projected/920a6ee8-58e2-4db1-932a-85fc8c182594-kube-api-access-rr2s6\") pod \"920a6ee8-58e2-4db1-932a-85fc8c182594\" (UID: \"920a6ee8-58e2-4db1-932a-85fc8c182594\") " Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.258212 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn9qs\" (UniqueName: 
\"kubernetes.io/projected/ebfee982-57b4-4073-ba7c-bffd8321f09f-kube-api-access-fn9qs\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.258232 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebfee982-57b4-4073-ba7c-bffd8321f09f-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.258738 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920a6ee8-58e2-4db1-932a-85fc8c182594-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "920a6ee8-58e2-4db1-932a-85fc8c182594" (UID: "920a6ee8-58e2-4db1-932a-85fc8c182594"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.259061 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920a6ee8-58e2-4db1-932a-85fc8c182594-config" (OuterVolumeSpecName: "config") pod "920a6ee8-58e2-4db1-932a-85fc8c182594" (UID: "920a6ee8-58e2-4db1-932a-85fc8c182594"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.261693 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/920a6ee8-58e2-4db1-932a-85fc8c182594-kube-api-access-rr2s6" (OuterVolumeSpecName: "kube-api-access-rr2s6") pod "920a6ee8-58e2-4db1-932a-85fc8c182594" (UID: "920a6ee8-58e2-4db1-932a-85fc8c182594"). InnerVolumeSpecName "kube-api-access-rr2s6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.366637 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/920a6ee8-58e2-4db1-932a-85fc8c182594-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.366684 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/920a6ee8-58e2-4db1-932a-85fc8c182594-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.366697 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr2s6\" (UniqueName: \"kubernetes.io/projected/920a6ee8-58e2-4db1-932a-85fc8c182594-kube-api-access-rr2s6\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.386792 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2lk9k"] Nov 24 12:42:42 crc kubenswrapper[4756]: W1124 12:42:42.397998 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda974f608_51c8_4650_be4a_fad42e19bd48.slice/crio-04dcc32c34cc748f808423958747c63e5b39cbce28794a16b2c3bbee82722082 WatchSource:0}: Error finding container 04dcc32c34cc748f808423958747c63e5b39cbce28794a16b2c3bbee82722082: Status 404 returned error can't find the container with id 04dcc32c34cc748f808423958747c63e5b39cbce28794a16b2c3bbee82722082 Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.418581 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.447464 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.488840 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-sb-0"] Nov 24 12:42:42 crc kubenswrapper[4756]: W1124 12:42:42.492766 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57506583_001b_4baf_b8b1_6cd4fc282472.slice/crio-87705f8a482464ed2bb5a744fe773aff0258f540764bf17d339767eee3586e60 WatchSource:0}: Error finding container 87705f8a482464ed2bb5a744fe773aff0258f540764bf17d339767eee3586e60: Status 404 returned error can't find the container with id 87705f8a482464ed2bb5a744fe773aff0258f540764bf17d339767eee3586e60 Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.545470 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-m7t44" event={"ID":"ebfee982-57b4-4073-ba7c-bffd8321f09f","Type":"ContainerDied","Data":"ffbcad068c3336bf4a82453df2cecf24026ef473556fdae43b06d918b98ea877"} Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.545489 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-m7t44" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.549686 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm" event={"ID":"40d39949-cd58-4321-af4c-8427b4766e1e","Type":"ContainerDied","Data":"e7eaaa35d560206746f37d71a627999e4a7afee71427eaae44991296945b495b"} Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.549593 4756 generic.go:334] "Generic (PLEG): container finished" podID="40d39949-cd58-4321-af4c-8427b4766e1e" containerID="e7eaaa35d560206746f37d71a627999e4a7afee71427eaae44991296945b495b" exitCode=0 Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.552328 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a974f608-51c8-4650-be4a-fad42e19bd48","Type":"ContainerStarted","Data":"04dcc32c34cc748f808423958747c63e5b39cbce28794a16b2c3bbee82722082"} Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.555759 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg" event={"ID":"920a6ee8-58e2-4db1-932a-85fc8c182594","Type":"ContainerDied","Data":"169f9fe8b3f70d37922b90ba0ed02d6bad4f1c1f59dd0f1442fc05f294f728e4"} Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.555865 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8ctpg" Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.557906 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b9447314-6235-4140-879b-cc20306cc7e1","Type":"ContainerStarted","Data":"1543bd20f0d1b4e0a124b60983fdbb5a9efeceb74fc53521d746d2ca6b05599a"} Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.560705 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"be09de3c-c143-4b11-98ca-45292b9b015c","Type":"ContainerStarted","Data":"b214622e08dc77576521264c6ddde324560d1e8b7c26006cfe23890950829a4f"} Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.563094 4756 generic.go:334] "Generic (PLEG): container finished" podID="32d08434-d30c-499b-9d05-4e9ca8fe28a1" containerID="423110c585e8cb2f550255eb7fc27391711946b2e886b940dc84566ff9450513" exitCode=0 Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.563158 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nwqvv" event={"ID":"32d08434-d30c-499b-9d05-4e9ca8fe28a1","Type":"ContainerDied","Data":"423110c585e8cb2f550255eb7fc27391711946b2e886b940dc84566ff9450513"} Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.569605 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"57506583-001b-4baf-b8b1-6cd4fc282472","Type":"ContainerStarted","Data":"87705f8a482464ed2bb5a744fe773aff0258f540764bf17d339767eee3586e60"} Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.579123 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eda12351-eabf-4909-a8fe-4cc2c3dabdb9","Type":"ContainerStarted","Data":"e122b3ad9fd1fddd946e5e75697f44cc9ff33c2ffa575390e749a455431f7ff1"} Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.580427 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-2lk9k" event={"ID":"f9af141a-c02a-4457-b68e-111765a62280","Type":"ContainerStarted","Data":"935e2cad6671603989b2a622749c455c5e51a09bb95dd69c4e0b076ef8ff16da"} Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.581763 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"80020f7a-2503-4446-84ea-148cb2bac0be","Type":"ContainerStarted","Data":"164ff0a5d427c4f7390767ec9fd362793a4d3cc52384bb15845645912d22681d"} Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.585271 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12075358-f893-49bc-9ace-dda0ce2865ec","Type":"ContainerStarted","Data":"b1461fd0949631826e77090ec0eae9459065eb5544e43d0567b1df6be0ef3789"} Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.600362 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45","Type":"ContainerStarted","Data":"ecc082056cae101a96683a40c0fd2d1c723c25496d41222abd7aa706dd2b0603"} Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.618836 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-m7t44"] Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.627543 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-m7t44"] Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.678802 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8ctpg"] Nov 24 12:42:42 crc kubenswrapper[4756]: I1124 12:42:42.685572 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8ctpg"] Nov 24 12:42:42 crc kubenswrapper[4756]: E1124 12:42:42.829229 4756 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 24 12:42:42 crc kubenswrapper[4756]: rpc error: code = Unknown desc = container 
create failed: mount `/var/lib/kubelet/pods/32d08434-d30c-499b-9d05-4e9ca8fe28a1/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 24 12:42:42 crc kubenswrapper[4756]: > podSandboxID="c48a1460d7a825d92c84a354bab9b3d46a0ce200f9fe3bd1b8cf4a11a441f8fb" Nov 24 12:42:42 crc kubenswrapper[4756]: E1124 12:42:42.829746 4756 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 24 12:42:42 crc kubenswrapper[4756]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2v9fg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-nwqvv_openstack(32d08434-d30c-499b-9d05-4e9ca8fe28a1): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/32d08434-d30c-499b-9d05-4e9ca8fe28a1/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 24 12:42:42 crc kubenswrapper[4756]: > logger="UnhandledError" Nov 24 12:42:42 crc kubenswrapper[4756]: E1124 12:42:42.831415 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/32d08434-d30c-499b-9d05-4e9ca8fe28a1/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-nwqvv" podUID="32d08434-d30c-499b-9d05-4e9ca8fe28a1" Nov 24 12:42:43 crc kubenswrapper[4756]: I1124 12:42:43.435831 4756 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-v5r4t"] Nov 24 12:42:43 crc kubenswrapper[4756]: I1124 12:42:43.631347 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm" event={"ID":"40d39949-cd58-4321-af4c-8427b4766e1e","Type":"ContainerStarted","Data":"22e34f39f64c91689f7328f1d4e2f85bac60df7a288f186537734a53f1f098e3"} Nov 24 12:42:43 crc kubenswrapper[4756]: I1124 12:42:43.671667 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm" podStartSLOduration=6.421520239 podStartE2EDuration="18.671648156s" podCreationTimestamp="2025-11-24 12:42:25 +0000 UTC" firstStartedPulling="2025-11-24 12:42:29.389819939 +0000 UTC m=+881.747334081" lastFinishedPulling="2025-11-24 12:42:41.639947856 +0000 UTC m=+893.997461998" observedRunningTime="2025-11-24 12:42:43.666113856 +0000 UTC m=+896.023628018" watchObservedRunningTime="2025-11-24 12:42:43.671648156 +0000 UTC m=+896.029162298" Nov 24 12:42:44 crc kubenswrapper[4756]: I1124 12:42:44.487177 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="920a6ee8-58e2-4db1-932a-85fc8c182594" path="/var/lib/kubelet/pods/920a6ee8-58e2-4db1-932a-85fc8c182594/volumes" Nov 24 12:42:44 crc kubenswrapper[4756]: I1124 12:42:44.487992 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebfee982-57b4-4073-ba7c-bffd8321f09f" path="/var/lib/kubelet/pods/ebfee982-57b4-4073-ba7c-bffd8321f09f/volumes" Nov 24 12:42:44 crc kubenswrapper[4756]: I1124 12:42:44.514914 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 12:42:44 crc kubenswrapper[4756]: I1124 12:42:44.640196 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm" Nov 24 12:42:45 crc kubenswrapper[4756]: W1124 12:42:45.757224 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d0b4104_d3c6_4219_b239_a52830b8429b.slice/crio-69d9cfca6ccd096f6eab913e2e1b847ead8c6fd802c6faebe22ae7f8475e4619 WatchSource:0}: Error finding container 69d9cfca6ccd096f6eab913e2e1b847ead8c6fd802c6faebe22ae7f8475e4619: Status 404 returned error can't find the container with id 69d9cfca6ccd096f6eab913e2e1b847ead8c6fd802c6faebe22ae7f8475e4619 Nov 24 12:42:45 crc kubenswrapper[4756]: W1124 12:42:45.759091 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf016c6c2_d6cf_42ff_a700_314a97bb1bcc.slice/crio-a19aea4b859717adfdea94c084767e365f8a09b4e69e377e7384073b34bba224 WatchSource:0}: Error finding container a19aea4b859717adfdea94c084767e365f8a09b4e69e377e7384073b34bba224: Status 404 returned error can't find the container with id a19aea4b859717adfdea94c084767e365f8a09b4e69e377e7384073b34bba224 Nov 24 12:42:46 crc kubenswrapper[4756]: I1124 12:42:46.656339 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f016c6c2-d6cf-42ff-a700-314a97bb1bcc","Type":"ContainerStarted","Data":"a19aea4b859717adfdea94c084767e365f8a09b4e69e377e7384073b34bba224"} Nov 24 12:42:46 crc kubenswrapper[4756]: I1124 12:42:46.658790 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v5r4t" event={"ID":"2d0b4104-d3c6-4219-b239-a52830b8429b","Type":"ContainerStarted","Data":"69d9cfca6ccd096f6eab913e2e1b847ead8c6fd802c6faebe22ae7f8475e4619"} Nov 24 12:42:50 crc kubenswrapper[4756]: I1124 12:42:50.698764 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nwqvv" event={"ID":"32d08434-d30c-499b-9d05-4e9ca8fe28a1","Type":"ContainerStarted","Data":"c6bb3455801b5319e4ac7c41cb607d972a5551707fe02e2845e0b1ff2c0e5a15"} Nov 24 12:42:50 crc kubenswrapper[4756]: I1124 12:42:50.699750 4756 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-nwqvv" Nov 24 12:42:50 crc kubenswrapper[4756]: I1124 12:42:50.700099 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"be09de3c-c143-4b11-98ca-45292b9b015c","Type":"ContainerStarted","Data":"7f8782d7c06849151d630d39f1d00afe2176af8549913c19075dd2bb6fd17ec2"} Nov 24 12:42:50 crc kubenswrapper[4756]: I1124 12:42:50.700436 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 24 12:42:50 crc kubenswrapper[4756]: I1124 12:42:50.702332 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"80020f7a-2503-4446-84ea-148cb2bac0be","Type":"ContainerStarted","Data":"5a87e5949445cca2d64628767bd381ca62837ab51eda391e1696e94b78e8d491"} Nov 24 12:42:50 crc kubenswrapper[4756]: I1124 12:42:50.707780 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45","Type":"ContainerStarted","Data":"4f0315d75c8312a94da3cd4a3a7be181eb22758c21ab3a3a7749a1e61a00808c"} Nov 24 12:42:50 crc kubenswrapper[4756]: I1124 12:42:50.728659 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-nwqvv" podStartSLOduration=10.180418941 podStartE2EDuration="25.728636633s" podCreationTimestamp="2025-11-24 12:42:25 +0000 UTC" firstStartedPulling="2025-11-24 12:42:26.043311628 +0000 UTC m=+878.400825770" lastFinishedPulling="2025-11-24 12:42:41.59152932 +0000 UTC m=+893.949043462" observedRunningTime="2025-11-24 12:42:50.722905057 +0000 UTC m=+903.080419209" watchObservedRunningTime="2025-11-24 12:42:50.728636633 +0000 UTC m=+903.086150775" Nov 24 12:42:50 crc kubenswrapper[4756]: I1124 12:42:50.772053 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.132434128 
podStartE2EDuration="21.772017162s" podCreationTimestamp="2025-11-24 12:42:29 +0000 UTC" firstStartedPulling="2025-11-24 12:42:42.167314102 +0000 UTC m=+894.524828244" lastFinishedPulling="2025-11-24 12:42:49.806897136 +0000 UTC m=+902.164411278" observedRunningTime="2025-11-24 12:42:50.768073805 +0000 UTC m=+903.125587957" watchObservedRunningTime="2025-11-24 12:42:50.772017162 +0000 UTC m=+903.129531304" Nov 24 12:42:50 crc kubenswrapper[4756]: I1124 12:42:50.824252 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm" Nov 24 12:42:50 crc kubenswrapper[4756]: I1124 12:42:50.892677 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nwqvv"] Nov 24 12:42:51 crc kubenswrapper[4756]: I1124 12:42:51.718384 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"57506583-001b-4baf-b8b1-6cd4fc282472","Type":"ContainerStarted","Data":"ce16001e894f8b80f4012cc3a31c1608c0d930be9e2fd4b8bed0f4af690e45c9"} Nov 24 12:42:51 crc kubenswrapper[4756]: I1124 12:42:51.720822 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lk9k" event={"ID":"f9af141a-c02a-4457-b68e-111765a62280","Type":"ContainerStarted","Data":"c205698d519d791b6b0614087c50c083680513ca0df642b307ab990c2f555324"} Nov 24 12:42:51 crc kubenswrapper[4756]: I1124 12:42:51.720963 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-2lk9k" Nov 24 12:42:51 crc kubenswrapper[4756]: I1124 12:42:51.722837 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v5r4t" event={"ID":"2d0b4104-d3c6-4219-b239-a52830b8429b","Type":"ContainerStarted","Data":"b587c55fb814801a1e535f27d31d87c114ed139460769942082cfb17a1b8bfe4"} Nov 24 12:42:51 crc kubenswrapper[4756]: I1124 12:42:51.742998 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-2lk9k" podStartSLOduration=11.329954342 podStartE2EDuration="18.742974657s" podCreationTimestamp="2025-11-24 12:42:33 +0000 UTC" firstStartedPulling="2025-11-24 12:42:42.393351887 +0000 UTC m=+894.750866029" lastFinishedPulling="2025-11-24 12:42:49.806372202 +0000 UTC m=+902.163886344" observedRunningTime="2025-11-24 12:42:51.734866637 +0000 UTC m=+904.092380779" watchObservedRunningTime="2025-11-24 12:42:51.742974657 +0000 UTC m=+904.100488799" Nov 24 12:42:52 crc kubenswrapper[4756]: I1124 12:42:52.732194 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eda12351-eabf-4909-a8fe-4cc2c3dabdb9","Type":"ContainerStarted","Data":"c7e6c0443476a62093749ddba3449300f67259c6a11162bd7d5cfd9095a76317"} Nov 24 12:42:52 crc kubenswrapper[4756]: I1124 12:42:52.733862 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f016c6c2-d6cf-42ff-a700-314a97bb1bcc","Type":"ContainerStarted","Data":"db352b8a774dfdf1a593c2c412a4068ff5639c361e2e63e2e9769d28a22b7cc1"} Nov 24 12:42:52 crc kubenswrapper[4756]: I1124 12:42:52.736838 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12075358-f893-49bc-9ace-dda0ce2865ec","Type":"ContainerStarted","Data":"61f8a5a4378bc5c37ebb21ccd8540519b1be51c2229aba8976e4848e526ac277"} Nov 24 12:42:52 crc kubenswrapper[4756]: I1124 12:42:52.740179 4756 generic.go:334] "Generic (PLEG): container finished" podID="2d0b4104-d3c6-4219-b239-a52830b8429b" containerID="b587c55fb814801a1e535f27d31d87c114ed139460769942082cfb17a1b8bfe4" exitCode=0 Nov 24 12:42:52 crc kubenswrapper[4756]: I1124 12:42:52.740376 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-nwqvv" podUID="32d08434-d30c-499b-9d05-4e9ca8fe28a1" containerName="dnsmasq-dns" containerID="cri-o://c6bb3455801b5319e4ac7c41cb607d972a5551707fe02e2845e0b1ff2c0e5a15" 
gracePeriod=10 Nov 24 12:42:52 crc kubenswrapper[4756]: I1124 12:42:52.741142 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v5r4t" event={"ID":"2d0b4104-d3c6-4219-b239-a52830b8429b","Type":"ContainerDied","Data":"b587c55fb814801a1e535f27d31d87c114ed139460769942082cfb17a1b8bfe4"} Nov 24 12:42:53 crc kubenswrapper[4756]: I1124 12:42:53.751046 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v5r4t" event={"ID":"2d0b4104-d3c6-4219-b239-a52830b8429b","Type":"ContainerStarted","Data":"3a360256d9e4b3594476108afc3297bc136c4568000fabc8f2297ffeb3ce6a3b"} Nov 24 12:42:53 crc kubenswrapper[4756]: I1124 12:42:53.753638 4756 generic.go:334] "Generic (PLEG): container finished" podID="32d08434-d30c-499b-9d05-4e9ca8fe28a1" containerID="c6bb3455801b5319e4ac7c41cb607d972a5551707fe02e2845e0b1ff2c0e5a15" exitCode=0 Nov 24 12:42:53 crc kubenswrapper[4756]: I1124 12:42:53.753697 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nwqvv" event={"ID":"32d08434-d30c-499b-9d05-4e9ca8fe28a1","Type":"ContainerDied","Data":"c6bb3455801b5319e4ac7c41cb607d972a5551707fe02e2845e0b1ff2c0e5a15"} Nov 24 12:42:53 crc kubenswrapper[4756]: I1124 12:42:53.755239 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a974f608-51c8-4650-be4a-fad42e19bd48","Type":"ContainerStarted","Data":"8b0f3852d7aa52dc4c3486ff8f736b20e5083ed69d982e5959ac4f8b04a47b7b"} Nov 24 12:42:53 crc kubenswrapper[4756]: I1124 12:42:53.759667 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b9447314-6235-4140-879b-cc20306cc7e1","Type":"ContainerStarted","Data":"e0243d92bbfa07a5427248d51533967e7f32270888b8ad7957d0e5367e632e5c"} Nov 24 12:42:53 crc kubenswrapper[4756]: I1124 12:42:53.815420 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" 
podStartSLOduration=12.623317183 podStartE2EDuration="22.815401164s" podCreationTimestamp="2025-11-24 12:42:31 +0000 UTC" firstStartedPulling="2025-11-24 12:42:42.422415197 +0000 UTC m=+894.779929349" lastFinishedPulling="2025-11-24 12:42:52.614499188 +0000 UTC m=+904.972013330" observedRunningTime="2025-11-24 12:42:53.794875636 +0000 UTC m=+906.152389788" watchObservedRunningTime="2025-11-24 12:42:53.815401164 +0000 UTC m=+906.172915306" Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.367261 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nwqvv" Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.507388 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d08434-d30c-499b-9d05-4e9ca8fe28a1-config\") pod \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\" (UID: \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\") " Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.507445 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v9fg\" (UniqueName: \"kubernetes.io/projected/32d08434-d30c-499b-9d05-4e9ca8fe28a1-kube-api-access-2v9fg\") pod \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\" (UID: \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\") " Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.507796 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d08434-d30c-499b-9d05-4e9ca8fe28a1-dns-svc\") pod \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\" (UID: \"32d08434-d30c-499b-9d05-4e9ca8fe28a1\") " Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.522379 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d08434-d30c-499b-9d05-4e9ca8fe28a1-kube-api-access-2v9fg" (OuterVolumeSpecName: "kube-api-access-2v9fg") pod "32d08434-d30c-499b-9d05-4e9ca8fe28a1" (UID: 
"32d08434-d30c-499b-9d05-4e9ca8fe28a1"). InnerVolumeSpecName "kube-api-access-2v9fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.543382 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d08434-d30c-499b-9d05-4e9ca8fe28a1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32d08434-d30c-499b-9d05-4e9ca8fe28a1" (UID: "32d08434-d30c-499b-9d05-4e9ca8fe28a1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.543445 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d08434-d30c-499b-9d05-4e9ca8fe28a1-config" (OuterVolumeSpecName: "config") pod "32d08434-d30c-499b-9d05-4e9ca8fe28a1" (UID: "32d08434-d30c-499b-9d05-4e9ca8fe28a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.610367 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32d08434-d30c-499b-9d05-4e9ca8fe28a1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.610411 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d08434-d30c-499b-9d05-4e9ca8fe28a1-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.610429 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v9fg\" (UniqueName: \"kubernetes.io/projected/32d08434-d30c-499b-9d05-4e9ca8fe28a1-kube-api-access-2v9fg\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.770985 4756 generic.go:334] "Generic (PLEG): container finished" podID="ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45" 
containerID="4f0315d75c8312a94da3cd4a3a7be181eb22758c21ab3a3a7749a1e61a00808c" exitCode=0 Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.771069 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45","Type":"ContainerDied","Data":"4f0315d75c8312a94da3cd4a3a7be181eb22758c21ab3a3a7749a1e61a00808c"} Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.774142 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nwqvv" event={"ID":"32d08434-d30c-499b-9d05-4e9ca8fe28a1","Type":"ContainerDied","Data":"c48a1460d7a825d92c84a354bab9b3d46a0ce200f9fe3bd1b8cf4a11a441f8fb"} Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.774220 4756 scope.go:117] "RemoveContainer" containerID="c6bb3455801b5319e4ac7c41cb607d972a5551707fe02e2845e0b1ff2c0e5a15" Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.774313 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nwqvv" Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.774908 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.831792 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nwqvv"] Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.840220 4756 scope.go:117] "RemoveContainer" containerID="423110c585e8cb2f550255eb7fc27391711946b2e886b940dc84566ff9450513" Nov 24 12:42:54 crc kubenswrapper[4756]: I1124 12:42:54.840626 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nwqvv"] Nov 24 12:42:55 crc kubenswrapper[4756]: I1124 12:42:55.786874 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f016c6c2-d6cf-42ff-a700-314a97bb1bcc","Type":"ContainerStarted","Data":"e428644e69dd213e155cbe53e88bc182ffb1f934696e56c783591a717b9ae98b"} Nov 24 12:42:55 crc kubenswrapper[4756]: I1124 12:42:55.790285 4756 generic.go:334] "Generic (PLEG): container finished" podID="80020f7a-2503-4446-84ea-148cb2bac0be" containerID="5a87e5949445cca2d64628767bd381ca62837ab51eda391e1696e94b78e8d491" exitCode=0 Nov 24 12:42:55 crc kubenswrapper[4756]: I1124 12:42:55.790371 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"80020f7a-2503-4446-84ea-148cb2bac0be","Type":"ContainerDied","Data":"5a87e5949445cca2d64628767bd381ca62837ab51eda391e1696e94b78e8d491"} Nov 24 12:42:55 crc kubenswrapper[4756]: I1124 12:42:55.794308 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v5r4t" event={"ID":"2d0b4104-d3c6-4219-b239-a52830b8429b","Type":"ContainerStarted","Data":"c40d7f72aa7dcc8ffb14f63050a9dcf286cd6fa91e9204433295a35cf3295a18"} Nov 24 12:42:55 crc kubenswrapper[4756]: I1124 12:42:55.794462 4756 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:55 crc kubenswrapper[4756]: I1124 12:42:55.794535 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:42:55 crc kubenswrapper[4756]: I1124 12:42:55.797360 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45","Type":"ContainerStarted","Data":"9b03722f33e03783518d42d5cf9fa66b97451ae8409800fd4cf4fbf259de5b2b"} Nov 24 12:42:55 crc kubenswrapper[4756]: I1124 12:42:55.802760 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"57506583-001b-4baf-b8b1-6cd4fc282472","Type":"ContainerStarted","Data":"a8caa466ab9c643c514eb62f8db8147d6d4a343e421b2cf9fc7fa59b777bee02"} Nov 24 12:42:55 crc kubenswrapper[4756]: I1124 12:42:55.824571 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.811440098 podStartE2EDuration="18.82453265s" podCreationTimestamp="2025-11-24 12:42:37 +0000 UTC" firstStartedPulling="2025-11-24 12:42:45.760361885 +0000 UTC m=+898.117876027" lastFinishedPulling="2025-11-24 12:42:54.773454437 +0000 UTC m=+907.130968579" observedRunningTime="2025-11-24 12:42:55.818951049 +0000 UTC m=+908.176465211" watchObservedRunningTime="2025-11-24 12:42:55.82453265 +0000 UTC m=+908.182046792" Nov 24 12:42:55 crc kubenswrapper[4756]: I1124 12:42:55.851414 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-v5r4t" podStartSLOduration=18.797612521 podStartE2EDuration="22.851387219s" podCreationTimestamp="2025-11-24 12:42:33 +0000 UTC" firstStartedPulling="2025-11-24 12:42:45.759680977 +0000 UTC m=+898.117195119" lastFinishedPulling="2025-11-24 12:42:49.813455675 +0000 UTC m=+902.170969817" observedRunningTime="2025-11-24 
12:42:55.85068384 +0000 UTC m=+908.208198002" watchObservedRunningTime="2025-11-24 12:42:55.851387219 +0000 UTC m=+908.208901371" Nov 24 12:42:55 crc kubenswrapper[4756]: I1124 12:42:55.903305 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=22.243706092 podStartE2EDuration="29.90328394s" podCreationTimestamp="2025-11-24 12:42:26 +0000 UTC" firstStartedPulling="2025-11-24 12:42:42.147401591 +0000 UTC m=+894.504915733" lastFinishedPulling="2025-11-24 12:42:49.806979439 +0000 UTC m=+902.164493581" observedRunningTime="2025-11-24 12:42:55.879052411 +0000 UTC m=+908.236566553" watchObservedRunningTime="2025-11-24 12:42:55.90328394 +0000 UTC m=+908.260798082" Nov 24 12:42:55 crc kubenswrapper[4756]: I1124 12:42:55.920368 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.645491484 podStartE2EDuration="18.920348854s" podCreationTimestamp="2025-11-24 12:42:37 +0000 UTC" firstStartedPulling="2025-11-24 12:42:42.495262337 +0000 UTC m=+894.852776479" lastFinishedPulling="2025-11-24 12:42:54.770119707 +0000 UTC m=+907.127633849" observedRunningTime="2025-11-24 12:42:55.91983445 +0000 UTC m=+908.277348592" watchObservedRunningTime="2025-11-24 12:42:55.920348854 +0000 UTC m=+908.277862996" Nov 24 12:42:56 crc kubenswrapper[4756]: I1124 12:42:56.433002 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:56 crc kubenswrapper[4756]: I1124 12:42:56.488907 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d08434-d30c-499b-9d05-4e9ca8fe28a1" path="/var/lib/kubelet/pods/32d08434-d30c-499b-9d05-4e9ca8fe28a1/volumes" Nov 24 12:42:56 crc kubenswrapper[4756]: I1124 12:42:56.489691 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:56 crc kubenswrapper[4756]: I1124 
12:42:56.813606 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"80020f7a-2503-4446-84ea-148cb2bac0be","Type":"ContainerStarted","Data":"b4324d9433665cdb424294fcfa819a2e0da7e90f0cf168adec08ca6e636f1a9b"} Nov 24 12:42:56 crc kubenswrapper[4756]: I1124 12:42:56.814744 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:56 crc kubenswrapper[4756]: I1124 12:42:56.847196 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=21.187846008 podStartE2EDuration="28.847144188s" podCreationTimestamp="2025-11-24 12:42:28 +0000 UTC" firstStartedPulling="2025-11-24 12:42:42.146301851 +0000 UTC m=+894.503815993" lastFinishedPulling="2025-11-24 12:42:49.805600031 +0000 UTC m=+902.163114173" observedRunningTime="2025-11-24 12:42:56.837882447 +0000 UTC m=+909.195396679" watchObservedRunningTime="2025-11-24 12:42:56.847144188 +0000 UTC m=+909.204658370" Nov 24 12:42:57 crc kubenswrapper[4756]: I1124 12:42:57.180123 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:57 crc kubenswrapper[4756]: I1124 12:42:57.227503 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:57 crc kubenswrapper[4756]: I1124 12:42:57.820366 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:57 crc kubenswrapper[4756]: I1124 12:42:57.856993 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 24 12:42:57 crc kubenswrapper[4756]: I1124 12:42:57.877029 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.136142 4756 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-dxs87"] Nov 24 12:42:58 crc kubenswrapper[4756]: E1124 12:42:58.136551 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d08434-d30c-499b-9d05-4e9ca8fe28a1" containerName="init" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.136582 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d08434-d30c-499b-9d05-4e9ca8fe28a1" containerName="init" Nov 24 12:42:58 crc kubenswrapper[4756]: E1124 12:42:58.136602 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d08434-d30c-499b-9d05-4e9ca8fe28a1" containerName="dnsmasq-dns" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.136612 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d08434-d30c-499b-9d05-4e9ca8fe28a1" containerName="dnsmasq-dns" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.136802 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d08434-d30c-499b-9d05-4e9ca8fe28a1" containerName="dnsmasq-dns" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.138135 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.140003 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.149107 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-dxs87"] Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.221462 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.222546 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.267270 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-dxs87\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.267323 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-config\") pod \"dnsmasq-dns-5bf47b49b7-dxs87\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.267376 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-dxs87\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.267810 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jw8s\" (UniqueName: \"kubernetes.io/projected/e59dba97-0552-43ad-b865-d93273c1d1e2-kube-api-access-8jw8s\") pod \"dnsmasq-dns-5bf47b49b7-dxs87\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.303771 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-dxs87"] Nov 24 12:42:58 crc kubenswrapper[4756]: E1124 12:42:58.304685 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-8jw8s ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" podUID="e59dba97-0552-43ad-b865-d93273c1d1e2" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.325067 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.326544 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.335688 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.335793 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.335951 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-6rmfx" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.335862 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.342765 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.350580 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-qflvr"] Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.352298 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.355311 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.371379 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-dxs87\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.371566 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-config\") pod \"dnsmasq-dns-5bf47b49b7-dxs87\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.371699 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-dxs87\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.371945 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jw8s\" (UniqueName: \"kubernetes.io/projected/e59dba97-0552-43ad-b865-d93273c1d1e2-kube-api-access-8jw8s\") pod \"dnsmasq-dns-5bf47b49b7-dxs87\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.372565 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-config\") pod \"dnsmasq-dns-5bf47b49b7-dxs87\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.373214 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-dxs87\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.373390 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-dxs87\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.394485 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-qflvr"] Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.407360 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jw8s\" (UniqueName: \"kubernetes.io/projected/e59dba97-0552-43ad-b865-d93273c1d1e2-kube-api-access-8jw8s\") pod \"dnsmasq-dns-5bf47b49b7-dxs87\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.417317 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jldfh"] Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.418513 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.420573 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.437293 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jldfh"] Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.473591 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.473721 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpjfg\" (UniqueName: \"kubernetes.io/projected/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-kube-api-access-tpjfg\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.473754 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.473803 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-dns-svc\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 
12:42:58.473841 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.473881 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw5ss\" (UniqueName: \"kubernetes.io/projected/ee506822-f2d6-42b9-9eef-7ba547770249-kube-api-access-lw5ss\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.473949 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.474008 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.474058 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-config\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc 
kubenswrapper[4756]: I1124 12:42:58.474079 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-scripts\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.474113 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.474134 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-config\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575060 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-scripts\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575116 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-combined-ca-bundle\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575138 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575175 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-ovs-rundir\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575204 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-config\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575231 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575257 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-config\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575279 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpjfg\" (UniqueName: \"kubernetes.io/projected/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-kube-api-access-tpjfg\") 
pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575317 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575336 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-dns-svc\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575356 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575382 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-ovn-rundir\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575493 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " 
pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575558 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw5ss\" (UniqueName: \"kubernetes.io/projected/ee506822-f2d6-42b9-9eef-7ba547770249-kube-api-access-lw5ss\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575599 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575664 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwxg7\" (UniqueName: \"kubernetes.io/projected/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-kube-api-access-fwxg7\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575715 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.575823 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-config\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " 
pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.576093 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-scripts\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.576849 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.577074 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-dns-svc\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.577124 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.577399 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-config\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.578050 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.578184 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-config\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.579530 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.579554 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.583977 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.595027 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpjfg\" (UniqueName: \"kubernetes.io/projected/6f03b394-8de8-41e4-9cbe-a09bc8e922ad-kube-api-access-tpjfg\") pod \"ovn-northd-0\" (UID: \"6f03b394-8de8-41e4-9cbe-a09bc8e922ad\") " 
pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.597259 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw5ss\" (UniqueName: \"kubernetes.io/projected/ee506822-f2d6-42b9-9eef-7ba547770249-kube-api-access-lw5ss\") pod \"dnsmasq-dns-8554648995-qflvr\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.651452 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.677834 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-config\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.677929 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.677971 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-ovn-rundir\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.678048 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwxg7\" (UniqueName: 
\"kubernetes.io/projected/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-kube-api-access-fwxg7\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.678134 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-combined-ca-bundle\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.678189 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-ovs-rundir\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.678505 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-ovs-rundir\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.679348 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-config\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.679374 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-ovn-rundir\") pod 
\"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.679947 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.685621 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-combined-ca-bundle\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.685894 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.708111 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwxg7\" (UniqueName: \"kubernetes.io/projected/fb0e2af7-5d32-48ae-9f03-91233a28ed8e-kube-api-access-fwxg7\") pod \"ovn-controller-metrics-jldfh\" (UID: \"fb0e2af7-5d32-48ae-9f03-91233a28ed8e\") " pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.758041 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jldfh" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.832957 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.860897 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.983908 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jw8s\" (UniqueName: \"kubernetes.io/projected/e59dba97-0552-43ad-b865-d93273c1d1e2-kube-api-access-8jw8s\") pod \"e59dba97-0552-43ad-b865-d93273c1d1e2\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.984083 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-config\") pod \"e59dba97-0552-43ad-b865-d93273c1d1e2\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.984144 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-ovsdbserver-nb\") pod \"e59dba97-0552-43ad-b865-d93273c1d1e2\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.984563 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-dns-svc\") pod \"e59dba97-0552-43ad-b865-d93273c1d1e2\" (UID: \"e59dba97-0552-43ad-b865-d93273c1d1e2\") " Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.984669 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-config" (OuterVolumeSpecName: "config") pod "e59dba97-0552-43ad-b865-d93273c1d1e2" (UID: "e59dba97-0552-43ad-b865-d93273c1d1e2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.984981 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e59dba97-0552-43ad-b865-d93273c1d1e2" (UID: "e59dba97-0552-43ad-b865-d93273c1d1e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.985206 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e59dba97-0552-43ad-b865-d93273c1d1e2" (UID: "e59dba97-0552-43ad-b865-d93273c1d1e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.985752 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.985778 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.985790 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e59dba97-0552-43ad-b865-d93273c1d1e2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:58 crc kubenswrapper[4756]: I1124 12:42:58.990126 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59dba97-0552-43ad-b865-d93273c1d1e2-kube-api-access-8jw8s" (OuterVolumeSpecName: "kube-api-access-8jw8s") pod 
"e59dba97-0552-43ad-b865-d93273c1d1e2" (UID: "e59dba97-0552-43ad-b865-d93273c1d1e2"). InnerVolumeSpecName "kube-api-access-8jw8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.087817 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jw8s\" (UniqueName: \"kubernetes.io/projected/e59dba97-0552-43ad-b865-d93273c1d1e2-kube-api-access-8jw8s\") on node \"crc\" DevicePath \"\"" Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.218197 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.297879 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-qflvr"] Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.305629 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jldfh"] Nov 24 12:42:59 crc kubenswrapper[4756]: W1124 12:42:59.306389 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb0e2af7_5d32_48ae_9f03_91233a28ed8e.slice/crio-c0b4416a8a76f465f929044b9648be05e47bc1165b37271130cd7cd640f27b80 WatchSource:0}: Error finding container c0b4416a8a76f465f929044b9648be05e47bc1165b37271130cd7cd640f27b80: Status 404 returned error can't find the container with id c0b4416a8a76f465f929044b9648be05e47bc1165b37271130cd7cd640f27b80 Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.437050 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.437363 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.807327 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/memcached-0" Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.844053 4756 generic.go:334] "Generic (PLEG): container finished" podID="ee506822-f2d6-42b9-9eef-7ba547770249" containerID="07ee2208547cdbac70931ced9dda63f71fa62556673b0decdc8160ee13bd699d" exitCode=0 Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.844129 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-qflvr" event={"ID":"ee506822-f2d6-42b9-9eef-7ba547770249","Type":"ContainerDied","Data":"07ee2208547cdbac70931ced9dda63f71fa62556673b0decdc8160ee13bd699d"} Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.844204 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-qflvr" event={"ID":"ee506822-f2d6-42b9-9eef-7ba547770249","Type":"ContainerStarted","Data":"2862635584ade92003ddb6c7051c74eaba2399375728f9f888e08fc66f06f335"} Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.847948 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jldfh" event={"ID":"fb0e2af7-5d32-48ae-9f03-91233a28ed8e","Type":"ContainerStarted","Data":"27e5aa177635adc9ea330995881f2a44ede5f19ed9a4e2a63ca7d6600d3901cb"} Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.848007 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jldfh" event={"ID":"fb0e2af7-5d32-48ae-9f03-91233a28ed8e","Type":"ContainerStarted","Data":"c0b4416a8a76f465f929044b9648be05e47bc1165b37271130cd7cd640f27b80"} Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.848917 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6f03b394-8de8-41e4-9cbe-a09bc8e922ad","Type":"ContainerStarted","Data":"ee64f3a9fbaca41a40ea1b96892545a76d94f44e2bf80072840923c71835b9a3"} Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.848927 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-dxs87" Nov 24 12:42:59 crc kubenswrapper[4756]: I1124 12:42:59.923231 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jldfh" podStartSLOduration=1.923196197 podStartE2EDuration="1.923196197s" podCreationTimestamp="2025-11-24 12:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:42:59.9096901 +0000 UTC m=+912.267204242" watchObservedRunningTime="2025-11-24 12:42:59.923196197 +0000 UTC m=+912.280710339" Nov 24 12:43:00 crc kubenswrapper[4756]: I1124 12:43:00.120253 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-dxs87"] Nov 24 12:43:00 crc kubenswrapper[4756]: I1124 12:43:00.126482 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-dxs87"] Nov 24 12:43:00 crc kubenswrapper[4756]: I1124 12:43:00.490768 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59dba97-0552-43ad-b865-d93273c1d1e2" path="/var/lib/kubelet/pods/e59dba97-0552-43ad-b865-d93273c1d1e2/volumes" Nov 24 12:43:00 crc kubenswrapper[4756]: I1124 12:43:00.668051 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 24 12:43:00 crc kubenswrapper[4756]: I1124 12:43:00.783613 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 24 12:43:00 crc kubenswrapper[4756]: I1124 12:43:00.874765 4756 generic.go:334] "Generic (PLEG): container finished" podID="a974f608-51c8-4650-be4a-fad42e19bd48" containerID="8b0f3852d7aa52dc4c3486ff8f736b20e5083ed69d982e5959ac4f8b04a47b7b" exitCode=0 Nov 24 12:43:00 crc kubenswrapper[4756]: I1124 12:43:00.875307 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"a974f608-51c8-4650-be4a-fad42e19bd48","Type":"ContainerDied","Data":"8b0f3852d7aa52dc4c3486ff8f736b20e5083ed69d982e5959ac4f8b04a47b7b"} Nov 24 12:43:00 crc kubenswrapper[4756]: I1124 12:43:00.881916 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6f03b394-8de8-41e4-9cbe-a09bc8e922ad","Type":"ContainerStarted","Data":"7902b3ee4d3059a2e6fb17b6b68b4ff621914e4fe0fc4263d6a52ca97bb89666"} Nov 24 12:43:00 crc kubenswrapper[4756]: I1124 12:43:00.896645 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-qflvr" event={"ID":"ee506822-f2d6-42b9-9eef-7ba547770249","Type":"ContainerStarted","Data":"e3ff4298783a0452d8d6ae11e017071ddcd1fb45c2a09fbfe9f78f8614549823"} Nov 24 12:43:00 crc kubenswrapper[4756]: I1124 12:43:00.930583 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-qflvr" podStartSLOduration=2.9305606920000002 podStartE2EDuration="2.930560692s" podCreationTimestamp="2025-11-24 12:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:43:00.926596904 +0000 UTC m=+913.284111056" watchObservedRunningTime="2025-11-24 12:43:00.930560692 +0000 UTC m=+913.288074834" Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.708727 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.800259 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-qflvr"] Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.844657 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5vk97"] Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.846015 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.877558 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5vk97"] Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.934670 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6f03b394-8de8-41e4-9cbe-a09bc8e922ad","Type":"ContainerStarted","Data":"67659c692442d994007d9a0aa5ec644da5e97f0ef1f939639d5f639e880763bd"} Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.934726 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.934765 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.946077 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-config\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.946137 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.946171 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: 
\"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.946219 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmr7d\" (UniqueName: \"kubernetes.io/projected/af71255b-f52f-494b-8f54-fb4f4526a742-kube-api-access-cmr7d\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.946237 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:01 crc kubenswrapper[4756]: I1124 12:43:01.974415 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.71121142 podStartE2EDuration="3.974397238s" podCreationTimestamp="2025-11-24 12:42:58 +0000 UTC" firstStartedPulling="2025-11-24 12:42:59.230601321 +0000 UTC m=+911.588115463" lastFinishedPulling="2025-11-24 12:43:00.493787139 +0000 UTC m=+912.851301281" observedRunningTime="2025-11-24 12:43:01.970108712 +0000 UTC m=+914.327622854" watchObservedRunningTime="2025-11-24 12:43:01.974397238 +0000 UTC m=+914.331911380" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.011226 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-74cp2"] Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.012681 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-74cp2" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.028009 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-6e16-account-create-66mlm"] Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.029965 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-6e16-account-create-66mlm" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.033460 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.047585 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmr7d\" (UniqueName: \"kubernetes.io/projected/af71255b-f52f-494b-8f54-fb4f4526a742-kube-api-access-cmr7d\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.047636 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.047768 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-config\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.051555 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-ovsdbserver-sb\") pod 
\"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.051622 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.051976 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-74cp2"] Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.053595 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.053886 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-config\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.058028 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.058128 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-6e16-account-create-66mlm"] Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 
12:43:02.060897 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.084040 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmr7d\" (UniqueName: \"kubernetes.io/projected/af71255b-f52f-494b-8f54-fb4f4526a742-kube-api-access-cmr7d\") pod \"dnsmasq-dns-b8fbc5445-5vk97\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.162202 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlt2c\" (UniqueName: \"kubernetes.io/projected/0bbceb03-940a-4f42-8252-7d4f5ee1b4d2-kube-api-access-hlt2c\") pod \"watcher-db-create-74cp2\" (UID: \"0bbceb03-940a-4f42-8252-7d4f5ee1b4d2\") " pod="openstack/watcher-db-create-74cp2" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.162277 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr9pv\" (UniqueName: \"kubernetes.io/projected/3b941b63-700f-407d-b545-71c5b1e54f2c-kube-api-access-hr9pv\") pod \"watcher-6e16-account-create-66mlm\" (UID: \"3b941b63-700f-407d-b545-71c5b1e54f2c\") " pod="openstack/watcher-6e16-account-create-66mlm" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.162344 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b941b63-700f-407d-b545-71c5b1e54f2c-operator-scripts\") pod \"watcher-6e16-account-create-66mlm\" (UID: \"3b941b63-700f-407d-b545-71c5b1e54f2c\") " pod="openstack/watcher-6e16-account-create-66mlm" Nov 24 
12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.162436 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbceb03-940a-4f42-8252-7d4f5ee1b4d2-operator-scripts\") pod \"watcher-db-create-74cp2\" (UID: \"0bbceb03-940a-4f42-8252-7d4f5ee1b4d2\") " pod="openstack/watcher-db-create-74cp2" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.171595 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.263807 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b941b63-700f-407d-b545-71c5b1e54f2c-operator-scripts\") pod \"watcher-6e16-account-create-66mlm\" (UID: \"3b941b63-700f-407d-b545-71c5b1e54f2c\") " pod="openstack/watcher-6e16-account-create-66mlm" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.263912 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbceb03-940a-4f42-8252-7d4f5ee1b4d2-operator-scripts\") pod \"watcher-db-create-74cp2\" (UID: \"0bbceb03-940a-4f42-8252-7d4f5ee1b4d2\") " pod="openstack/watcher-db-create-74cp2" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.263982 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlt2c\" (UniqueName: \"kubernetes.io/projected/0bbceb03-940a-4f42-8252-7d4f5ee1b4d2-kube-api-access-hlt2c\") pod \"watcher-db-create-74cp2\" (UID: \"0bbceb03-940a-4f42-8252-7d4f5ee1b4d2\") " pod="openstack/watcher-db-create-74cp2" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.264024 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr9pv\" (UniqueName: 
\"kubernetes.io/projected/3b941b63-700f-407d-b545-71c5b1e54f2c-kube-api-access-hr9pv\") pod \"watcher-6e16-account-create-66mlm\" (UID: \"3b941b63-700f-407d-b545-71c5b1e54f2c\") " pod="openstack/watcher-6e16-account-create-66mlm" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.265463 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b941b63-700f-407d-b545-71c5b1e54f2c-operator-scripts\") pod \"watcher-6e16-account-create-66mlm\" (UID: \"3b941b63-700f-407d-b545-71c5b1e54f2c\") " pod="openstack/watcher-6e16-account-create-66mlm" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.265590 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbceb03-940a-4f42-8252-7d4f5ee1b4d2-operator-scripts\") pod \"watcher-db-create-74cp2\" (UID: \"0bbceb03-940a-4f42-8252-7d4f5ee1b4d2\") " pod="openstack/watcher-db-create-74cp2" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.281429 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr9pv\" (UniqueName: \"kubernetes.io/projected/3b941b63-700f-407d-b545-71c5b1e54f2c-kube-api-access-hr9pv\") pod \"watcher-6e16-account-create-66mlm\" (UID: \"3b941b63-700f-407d-b545-71c5b1e54f2c\") " pod="openstack/watcher-6e16-account-create-66mlm" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.284049 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlt2c\" (UniqueName: \"kubernetes.io/projected/0bbceb03-940a-4f42-8252-7d4f5ee1b4d2-kube-api-access-hlt2c\") pod \"watcher-db-create-74cp2\" (UID: \"0bbceb03-940a-4f42-8252-7d4f5ee1b4d2\") " pod="openstack/watcher-db-create-74cp2" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.336537 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-74cp2" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.355659 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-6e16-account-create-66mlm" Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.686385 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5vk97"] Nov 24 12:43:02 crc kubenswrapper[4756]: W1124 12:43:02.712166 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf71255b_f52f_494b_8f54_fb4f4526a742.slice/crio-d3cac5cd8661e9ca5a7e4d138d41b7c52301c2ef307256f894dffa2e71ab1c5d WatchSource:0}: Error finding container d3cac5cd8661e9ca5a7e4d138d41b7c52301c2ef307256f894dffa2e71ab1c5d: Status 404 returned error can't find the container with id d3cac5cd8661e9ca5a7e4d138d41b7c52301c2ef307256f894dffa2e71ab1c5d Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.919586 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-74cp2"] Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.944521 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" event={"ID":"af71255b-f52f-494b-8f54-fb4f4526a742","Type":"ContainerStarted","Data":"d3cac5cd8661e9ca5a7e4d138d41b7c52301c2ef307256f894dffa2e71ab1c5d"} Nov 24 12:43:02 crc kubenswrapper[4756]: I1124 12:43:02.945079 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-qflvr" podUID="ee506822-f2d6-42b9-9eef-7ba547770249" containerName="dnsmasq-dns" containerID="cri-o://e3ff4298783a0452d8d6ae11e017071ddcd1fb45c2a09fbfe9f78f8614549823" gracePeriod=10 Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.047113 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 
12:43:03.063057 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.065797 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.067990 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.068024 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.068085 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-8kp9p" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.071931 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.088284 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-6e16-account-create-66mlm"] Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.186397 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwrvl\" (UniqueName: \"kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-kube-api-access-zwrvl\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.186546 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9cf650c1-2692-4b3d-89c5-5e3e0178e213-lock\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.186586 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9cf650c1-2692-4b3d-89c5-5e3e0178e213-cache\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.186613 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.186641 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.288992 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwrvl\" (UniqueName: \"kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-kube-api-access-zwrvl\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.289109 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9cf650c1-2692-4b3d-89c5-5e3e0178e213-lock\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.289149 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9cf650c1-2692-4b3d-89c5-5e3e0178e213-cache\") pod \"swift-storage-0\" (UID: 
\"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.289190 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.289249 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.289941 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9cf650c1-2692-4b3d-89c5-5e3e0178e213-lock\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.290057 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: E1124 12:43:03.290110 4756 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 12:43:03 crc kubenswrapper[4756]: E1124 12:43:03.290126 4756 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 12:43:03 crc kubenswrapper[4756]: E1124 12:43:03.290197 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift podName:9cf650c1-2692-4b3d-89c5-5e3e0178e213 nodeName:}" failed. No retries permitted until 2025-11-24 12:43:03.790173116 +0000 UTC m=+916.147687258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift") pod "swift-storage-0" (UID: "9cf650c1-2692-4b3d-89c5-5e3e0178e213") : configmap "swift-ring-files" not found Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.290214 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9cf650c1-2692-4b3d-89c5-5e3e0178e213-cache\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.331407 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwrvl\" (UniqueName: \"kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-kube-api-access-zwrvl\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.362638 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.456438 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-qflvr" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.594023 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-ovsdbserver-sb\") pod \"ee506822-f2d6-42b9-9eef-7ba547770249\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.594151 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-config\") pod \"ee506822-f2d6-42b9-9eef-7ba547770249\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.594275 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw5ss\" (UniqueName: \"kubernetes.io/projected/ee506822-f2d6-42b9-9eef-7ba547770249-kube-api-access-lw5ss\") pod \"ee506822-f2d6-42b9-9eef-7ba547770249\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.594443 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-dns-svc\") pod \"ee506822-f2d6-42b9-9eef-7ba547770249\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.594516 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-ovsdbserver-nb\") pod \"ee506822-f2d6-42b9-9eef-7ba547770249\" (UID: \"ee506822-f2d6-42b9-9eef-7ba547770249\") " Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.596598 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/openstack-cell1-galera-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.625894 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee506822-f2d6-42b9-9eef-7ba547770249-kube-api-access-lw5ss" (OuterVolumeSpecName: "kube-api-access-lw5ss") pod "ee506822-f2d6-42b9-9eef-7ba547770249" (UID: "ee506822-f2d6-42b9-9eef-7ba547770249"). InnerVolumeSpecName "kube-api-access-lw5ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.680311 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4x5lq"] Nov 24 12:43:03 crc kubenswrapper[4756]: E1124 12:43:03.680933 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee506822-f2d6-42b9-9eef-7ba547770249" containerName="init" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.681050 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee506822-f2d6-42b9-9eef-7ba547770249" containerName="init" Nov 24 12:43:03 crc kubenswrapper[4756]: E1124 12:43:03.681148 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee506822-f2d6-42b9-9eef-7ba547770249" containerName="dnsmasq-dns" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.681238 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee506822-f2d6-42b9-9eef-7ba547770249" containerName="dnsmasq-dns" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.681500 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee506822-f2d6-42b9-9eef-7ba547770249" containerName="dnsmasq-dns" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.682703 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4x5lq" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.688722 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.689531 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.690764 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.692969 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4x5lq"] Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.698107 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw5ss\" (UniqueName: \"kubernetes.io/projected/ee506822-f2d6-42b9-9eef-7ba547770249-kube-api-access-lw5ss\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.718103 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-config" (OuterVolumeSpecName: "config") pod "ee506822-f2d6-42b9-9eef-7ba547770249" (UID: "ee506822-f2d6-42b9-9eef-7ba547770249"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.720436 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ee506822-f2d6-42b9-9eef-7ba547770249" (UID: "ee506822-f2d6-42b9-9eef-7ba547770249"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.720822 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee506822-f2d6-42b9-9eef-7ba547770249" (UID: "ee506822-f2d6-42b9-9eef-7ba547770249"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.722601 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ee506822-f2d6-42b9-9eef-7ba547770249" (UID: "ee506822-f2d6-42b9-9eef-7ba547770249"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.800510 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-dispersionconf\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.800567 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-combined-ca-bundle\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.800601 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-scripts\") pod 
\"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.800619 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlrpg\" (UniqueName: \"kubernetes.io/projected/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-kube-api-access-rlrpg\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.800669 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.800702 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-swiftconf\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.800734 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-ring-data-devices\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.800757 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-etc-swift\") pod \"swift-ring-rebalance-4x5lq\" 
(UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.800802 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.800812 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.800820 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.800831 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee506822-f2d6-42b9-9eef-7ba547770249-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:03 crc kubenswrapper[4756]: E1124 12:43:03.800969 4756 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 12:43:03 crc kubenswrapper[4756]: E1124 12:43:03.800982 4756 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 12:43:03 crc kubenswrapper[4756]: E1124 12:43:03.801018 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift podName:9cf650c1-2692-4b3d-89c5-5e3e0178e213 nodeName:}" failed. No retries permitted until 2025-11-24 12:43:04.801005333 +0000 UTC m=+917.158519475 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift") pod "swift-storage-0" (UID: "9cf650c1-2692-4b3d-89c5-5e3e0178e213") : configmap "swift-ring-files" not found
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.833417 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.902134 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-dispersionconf\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.902224 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-combined-ca-bundle\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.902271 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-scripts\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.902292 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlrpg\" (UniqueName: \"kubernetes.io/projected/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-kube-api-access-rlrpg\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.902402 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-swiftconf\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.902453 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-ring-data-devices\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.902484 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-etc-swift\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.902866 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-etc-swift\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.903661 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-ring-data-devices\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.903902 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-scripts\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.905694 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-dispersionconf\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.906807 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-combined-ca-bundle\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.910736 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-swiftconf\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.921090 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlrpg\" (UniqueName: \"kubernetes.io/projected/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-kube-api-access-rlrpg\") pod \"swift-ring-rebalance-4x5lq\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.954769 4756 generic.go:334] "Generic (PLEG): container finished" podID="3b941b63-700f-407d-b545-71c5b1e54f2c" containerID="552ffd6d6bad8035256d2cb371ce468e8538329f075cf3fe3591c0020a7e92c7" exitCode=0
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.954859 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-6e16-account-create-66mlm" event={"ID":"3b941b63-700f-407d-b545-71c5b1e54f2c","Type":"ContainerDied","Data":"552ffd6d6bad8035256d2cb371ce468e8538329f075cf3fe3591c0020a7e92c7"}
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.954916 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-6e16-account-create-66mlm" event={"ID":"3b941b63-700f-407d-b545-71c5b1e54f2c","Type":"ContainerStarted","Data":"773d33b1701ad0c588710b3a519296148d58a07c53e509370a1125af9012da77"}
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.957949 4756 generic.go:334] "Generic (PLEG): container finished" podID="ee506822-f2d6-42b9-9eef-7ba547770249" containerID="e3ff4298783a0452d8d6ae11e017071ddcd1fb45c2a09fbfe9f78f8614549823" exitCode=0
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.958019 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-qflvr" event={"ID":"ee506822-f2d6-42b9-9eef-7ba547770249","Type":"ContainerDied","Data":"e3ff4298783a0452d8d6ae11e017071ddcd1fb45c2a09fbfe9f78f8614549823"}
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.958026 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-qflvr"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.958042 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-qflvr" event={"ID":"ee506822-f2d6-42b9-9eef-7ba547770249","Type":"ContainerDied","Data":"2862635584ade92003ddb6c7051c74eaba2399375728f9f888e08fc66f06f335"}
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.958060 4756 scope.go:117] "RemoveContainer" containerID="e3ff4298783a0452d8d6ae11e017071ddcd1fb45c2a09fbfe9f78f8614549823"
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.961239 4756 generic.go:334] "Generic (PLEG): container finished" podID="af71255b-f52f-494b-8f54-fb4f4526a742" containerID="7712822a54e3ddeb1ddfac7d35d6093dc0978ce2e147c44641792a7eca82e607" exitCode=0
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.961527 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" event={"ID":"af71255b-f52f-494b-8f54-fb4f4526a742","Type":"ContainerDied","Data":"7712822a54e3ddeb1ddfac7d35d6093dc0978ce2e147c44641792a7eca82e607"}
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.963028 4756 generic.go:334] "Generic (PLEG): container finished" podID="0bbceb03-940a-4f42-8252-7d4f5ee1b4d2" containerID="0943e2249df9895df437f12d282b14a27f5f138d2cead318cfcf7dd916ccf6bc" exitCode=0
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.963465 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-74cp2" event={"ID":"0bbceb03-940a-4f42-8252-7d4f5ee1b4d2","Type":"ContainerDied","Data":"0943e2249df9895df437f12d282b14a27f5f138d2cead318cfcf7dd916ccf6bc"}
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.963487 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-74cp2" event={"ID":"0bbceb03-940a-4f42-8252-7d4f5ee1b4d2","Type":"ContainerStarted","Data":"bdba0285c773636ce68e82c3d74dacca89a977787132bc7c40c15c7da3be5299"}
Nov 24 12:43:03 crc kubenswrapper[4756]: I1124 12:43:03.988588 4756 scope.go:117] "RemoveContainer" containerID="07ee2208547cdbac70931ced9dda63f71fa62556673b0decdc8160ee13bd699d"
Nov 24 12:43:04 crc kubenswrapper[4756]: I1124 12:43:04.028327 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-qflvr"]
Nov 24 12:43:04 crc kubenswrapper[4756]: I1124 12:43:04.034893 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-qflvr"]
Nov 24 12:43:04 crc kubenswrapper[4756]: I1124 12:43:04.048186 4756 scope.go:117] "RemoveContainer" containerID="e3ff4298783a0452d8d6ae11e017071ddcd1fb45c2a09fbfe9f78f8614549823"
Nov 24 12:43:04 crc kubenswrapper[4756]: E1124 12:43:04.048637 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ff4298783a0452d8d6ae11e017071ddcd1fb45c2a09fbfe9f78f8614549823\": container with ID starting with e3ff4298783a0452d8d6ae11e017071ddcd1fb45c2a09fbfe9f78f8614549823 not found: ID does not exist" containerID="e3ff4298783a0452d8d6ae11e017071ddcd1fb45c2a09fbfe9f78f8614549823"
Nov 24 12:43:04 crc kubenswrapper[4756]: I1124 12:43:04.048679 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ff4298783a0452d8d6ae11e017071ddcd1fb45c2a09fbfe9f78f8614549823"} err="failed to get container status \"e3ff4298783a0452d8d6ae11e017071ddcd1fb45c2a09fbfe9f78f8614549823\": rpc error: code = NotFound desc = could not find container \"e3ff4298783a0452d8d6ae11e017071ddcd1fb45c2a09fbfe9f78f8614549823\": container with ID starting with e3ff4298783a0452d8d6ae11e017071ddcd1fb45c2a09fbfe9f78f8614549823 not found: ID does not exist"
Nov 24 12:43:04 crc kubenswrapper[4756]: I1124 12:43:04.048704 4756 scope.go:117] "RemoveContainer" containerID="07ee2208547cdbac70931ced9dda63f71fa62556673b0decdc8160ee13bd699d"
Nov 24 12:43:04 crc kubenswrapper[4756]: E1124 12:43:04.049178 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ee2208547cdbac70931ced9dda63f71fa62556673b0decdc8160ee13bd699d\": container with ID starting with 07ee2208547cdbac70931ced9dda63f71fa62556673b0decdc8160ee13bd699d not found: ID does not exist" containerID="07ee2208547cdbac70931ced9dda63f71fa62556673b0decdc8160ee13bd699d"
Nov 24 12:43:04 crc kubenswrapper[4756]: I1124 12:43:04.049204 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ee2208547cdbac70931ced9dda63f71fa62556673b0decdc8160ee13bd699d"} err="failed to get container status \"07ee2208547cdbac70931ced9dda63f71fa62556673b0decdc8160ee13bd699d\": rpc error: code = NotFound desc = could not find container \"07ee2208547cdbac70931ced9dda63f71fa62556673b0decdc8160ee13bd699d\": container with ID starting with 07ee2208547cdbac70931ced9dda63f71fa62556673b0decdc8160ee13bd699d not found: ID does not exist"
Nov 24 12:43:04 crc kubenswrapper[4756]: I1124 12:43:04.199639 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4x5lq"
Nov 24 12:43:04 crc kubenswrapper[4756]: I1124 12:43:04.495965 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee506822-f2d6-42b9-9eef-7ba547770249" path="/var/lib/kubelet/pods/ee506822-f2d6-42b9-9eef-7ba547770249/volumes"
Nov 24 12:43:04 crc kubenswrapper[4756]: I1124 12:43:04.677575 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4x5lq"]
Nov 24 12:43:04 crc kubenswrapper[4756]: W1124 12:43:04.684997 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e1e7fc3_fb76_4de7_8a1a_7b93e50faf07.slice/crio-c190de2a37054f5be4ae5cf36bf10c626d7565f6d529dc057ef378f549468a27 WatchSource:0}: Error finding container c190de2a37054f5be4ae5cf36bf10c626d7565f6d529dc057ef378f549468a27: Status 404 returned error can't find the container with id c190de2a37054f5be4ae5cf36bf10c626d7565f6d529dc057ef378f549468a27
Nov 24 12:43:04 crc kubenswrapper[4756]: I1124 12:43:04.827282 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0"
Nov 24 12:43:04 crc kubenswrapper[4756]: E1124 12:43:04.827490 4756 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 24 12:43:04 crc kubenswrapper[4756]: E1124 12:43:04.827931 4756 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 24 12:43:04 crc kubenswrapper[4756]: E1124 12:43:04.828019 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift podName:9cf650c1-2692-4b3d-89c5-5e3e0178e213 nodeName:}" failed. No retries permitted until 2025-11-24 12:43:06.827992101 +0000 UTC m=+919.185506243 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift") pod "swift-storage-0" (UID: "9cf650c1-2692-4b3d-89c5-5e3e0178e213") : configmap "swift-ring-files" not found
Nov 24 12:43:04 crc kubenswrapper[4756]: I1124 12:43:04.972651 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4x5lq" event={"ID":"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07","Type":"ContainerStarted","Data":"c190de2a37054f5be4ae5cf36bf10c626d7565f6d529dc057ef378f549468a27"}
Nov 24 12:43:04 crc kubenswrapper[4756]: I1124 12:43:04.977103 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" event={"ID":"af71255b-f52f-494b-8f54-fb4f4526a742","Type":"ContainerStarted","Data":"162050712a6ce5a9edb1731cdcaaa1ec29485d901d92b1a2022f15e4259de83d"}
Nov 24 12:43:04 crc kubenswrapper[4756]: I1124 12:43:04.994988 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" podStartSLOduration=3.9949693699999997 podStartE2EDuration="3.99496937s" podCreationTimestamp="2025-11-24 12:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:43:04.994030974 +0000 UTC m=+917.351545126" watchObservedRunningTime="2025-11-24 12:43:04.99496937 +0000 UTC m=+917.352483502"
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.410941 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-6e16-account-create-66mlm"
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.415283 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-74cp2"
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.542129 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbceb03-940a-4f42-8252-7d4f5ee1b4d2-operator-scripts\") pod \"0bbceb03-940a-4f42-8252-7d4f5ee1b4d2\" (UID: \"0bbceb03-940a-4f42-8252-7d4f5ee1b4d2\") "
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.542227 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlt2c\" (UniqueName: \"kubernetes.io/projected/0bbceb03-940a-4f42-8252-7d4f5ee1b4d2-kube-api-access-hlt2c\") pod \"0bbceb03-940a-4f42-8252-7d4f5ee1b4d2\" (UID: \"0bbceb03-940a-4f42-8252-7d4f5ee1b4d2\") "
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.542347 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b941b63-700f-407d-b545-71c5b1e54f2c-operator-scripts\") pod \"3b941b63-700f-407d-b545-71c5b1e54f2c\" (UID: \"3b941b63-700f-407d-b545-71c5b1e54f2c\") "
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.542387 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr9pv\" (UniqueName: \"kubernetes.io/projected/3b941b63-700f-407d-b545-71c5b1e54f2c-kube-api-access-hr9pv\") pod \"3b941b63-700f-407d-b545-71c5b1e54f2c\" (UID: \"3b941b63-700f-407d-b545-71c5b1e54f2c\") "
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.542951 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bbceb03-940a-4f42-8252-7d4f5ee1b4d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bbceb03-940a-4f42-8252-7d4f5ee1b4d2" (UID: "0bbceb03-940a-4f42-8252-7d4f5ee1b4d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.543323 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b941b63-700f-407d-b545-71c5b1e54f2c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b941b63-700f-407d-b545-71c5b1e54f2c" (UID: "3b941b63-700f-407d-b545-71c5b1e54f2c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.549389 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b941b63-700f-407d-b545-71c5b1e54f2c-kube-api-access-hr9pv" (OuterVolumeSpecName: "kube-api-access-hr9pv") pod "3b941b63-700f-407d-b545-71c5b1e54f2c" (UID: "3b941b63-700f-407d-b545-71c5b1e54f2c"). InnerVolumeSpecName "kube-api-access-hr9pv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.561340 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bbceb03-940a-4f42-8252-7d4f5ee1b4d2-kube-api-access-hlt2c" (OuterVolumeSpecName: "kube-api-access-hlt2c") pod "0bbceb03-940a-4f42-8252-7d4f5ee1b4d2" (UID: "0bbceb03-940a-4f42-8252-7d4f5ee1b4d2"). InnerVolumeSpecName "kube-api-access-hlt2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.644392 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b941b63-700f-407d-b545-71c5b1e54f2c-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.644427 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr9pv\" (UniqueName: \"kubernetes.io/projected/3b941b63-700f-407d-b545-71c5b1e54f2c-kube-api-access-hr9pv\") on node \"crc\" DevicePath \"\""
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.644438 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbceb03-940a-4f42-8252-7d4f5ee1b4d2-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.644446 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlt2c\" (UniqueName: \"kubernetes.io/projected/0bbceb03-940a-4f42-8252-7d4f5ee1b4d2-kube-api-access-hlt2c\") on node \"crc\" DevicePath \"\""
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.989562 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-74cp2" event={"ID":"0bbceb03-940a-4f42-8252-7d4f5ee1b4d2","Type":"ContainerDied","Data":"bdba0285c773636ce68e82c3d74dacca89a977787132bc7c40c15c7da3be5299"}
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.989880 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdba0285c773636ce68e82c3d74dacca89a977787132bc7c40c15c7da3be5299"
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.989589 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-74cp2"
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.992961 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-6e16-account-create-66mlm" event={"ID":"3b941b63-700f-407d-b545-71c5b1e54f2c","Type":"ContainerDied","Data":"773d33b1701ad0c588710b3a519296148d58a07c53e509370a1125af9012da77"}
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.993002 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="773d33b1701ad0c588710b3a519296148d58a07c53e509370a1125af9012da77"
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.993029 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-6e16-account-create-66mlm"
Nov 24 12:43:05 crc kubenswrapper[4756]: I1124 12:43:05.993035 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97"
Nov 24 12:43:06 crc kubenswrapper[4756]: I1124 12:43:06.874967 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0"
Nov 24 12:43:06 crc kubenswrapper[4756]: E1124 12:43:06.875225 4756 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 24 12:43:06 crc kubenswrapper[4756]: E1124 12:43:06.875258 4756 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 24 12:43:06 crc kubenswrapper[4756]: E1124 12:43:06.875343 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift podName:9cf650c1-2692-4b3d-89c5-5e3e0178e213 nodeName:}" failed. No retries permitted until 2025-11-24 12:43:10.875324915 +0000 UTC m=+923.232839057 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift") pod "swift-storage-0" (UID: "9cf650c1-2692-4b3d-89c5-5e3e0178e213") : configmap "swift-ring-files" not found
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.391199 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-2kfkz"]
Nov 24 12:43:09 crc kubenswrapper[4756]: E1124 12:43:09.392699 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbceb03-940a-4f42-8252-7d4f5ee1b4d2" containerName="mariadb-database-create"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.392787 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbceb03-940a-4f42-8252-7d4f5ee1b4d2" containerName="mariadb-database-create"
Nov 24 12:43:09 crc kubenswrapper[4756]: E1124 12:43:09.392857 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b941b63-700f-407d-b545-71c5b1e54f2c" containerName="mariadb-account-create"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.392910 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b941b63-700f-407d-b545-71c5b1e54f2c" containerName="mariadb-account-create"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.393187 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bbceb03-940a-4f42-8252-7d4f5ee1b4d2" containerName="mariadb-database-create"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.393312 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b941b63-700f-407d-b545-71c5b1e54f2c" containerName="mariadb-account-create"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.394108 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2kfkz"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.401819 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2kfkz"]
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.505231 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a2de-account-create-jjkvc"]
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.506795 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a2de-account-create-jjkvc"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.511798 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.519372 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a2de-account-create-jjkvc"]
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.546125 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84d8e682-5102-429f-8b52-c8c962ef8ebd-operator-scripts\") pod \"keystone-db-create-2kfkz\" (UID: \"84d8e682-5102-429f-8b52-c8c962ef8ebd\") " pod="openstack/keystone-db-create-2kfkz"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.546289 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqpgv\" (UniqueName: \"kubernetes.io/projected/84d8e682-5102-429f-8b52-c8c962ef8ebd-kube-api-access-hqpgv\") pod \"keystone-db-create-2kfkz\" (UID: \"84d8e682-5102-429f-8b52-c8c962ef8ebd\") " pod="openstack/keystone-db-create-2kfkz"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.647516 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqpgv\" (UniqueName: \"kubernetes.io/projected/84d8e682-5102-429f-8b52-c8c962ef8ebd-kube-api-access-hqpgv\") pod \"keystone-db-create-2kfkz\" (UID: \"84d8e682-5102-429f-8b52-c8c962ef8ebd\") " pod="openstack/keystone-db-create-2kfkz"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.647563 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfx26\" (UniqueName: \"kubernetes.io/projected/b1b08308-274c-46b2-a129-568fc7acc250-kube-api-access-gfx26\") pod \"keystone-a2de-account-create-jjkvc\" (UID: \"b1b08308-274c-46b2-a129-568fc7acc250\") " pod="openstack/keystone-a2de-account-create-jjkvc"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.647626 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1b08308-274c-46b2-a129-568fc7acc250-operator-scripts\") pod \"keystone-a2de-account-create-jjkvc\" (UID: \"b1b08308-274c-46b2-a129-568fc7acc250\") " pod="openstack/keystone-a2de-account-create-jjkvc"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.647722 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84d8e682-5102-429f-8b52-c8c962ef8ebd-operator-scripts\") pod \"keystone-db-create-2kfkz\" (UID: \"84d8e682-5102-429f-8b52-c8c962ef8ebd\") " pod="openstack/keystone-db-create-2kfkz"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.648606 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84d8e682-5102-429f-8b52-c8c962ef8ebd-operator-scripts\") pod \"keystone-db-create-2kfkz\" (UID: \"84d8e682-5102-429f-8b52-c8c962ef8ebd\") " pod="openstack/keystone-db-create-2kfkz"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.666541 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqpgv\" (UniqueName: \"kubernetes.io/projected/84d8e682-5102-429f-8b52-c8c962ef8ebd-kube-api-access-hqpgv\") pod \"keystone-db-create-2kfkz\" (UID: \"84d8e682-5102-429f-8b52-c8c962ef8ebd\") " pod="openstack/keystone-db-create-2kfkz"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.716096 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-dpjcl"]
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.717931 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dpjcl"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.725205 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dpjcl"]
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.726904 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2kfkz"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.762888 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1b08308-274c-46b2-a129-568fc7acc250-operator-scripts\") pod \"keystone-a2de-account-create-jjkvc\" (UID: \"b1b08308-274c-46b2-a129-568fc7acc250\") " pod="openstack/keystone-a2de-account-create-jjkvc"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.763104 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfx26\" (UniqueName: \"kubernetes.io/projected/b1b08308-274c-46b2-a129-568fc7acc250-kube-api-access-gfx26\") pod \"keystone-a2de-account-create-jjkvc\" (UID: \"b1b08308-274c-46b2-a129-568fc7acc250\") " pod="openstack/keystone-a2de-account-create-jjkvc"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.764240 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1b08308-274c-46b2-a129-568fc7acc250-operator-scripts\") pod \"keystone-a2de-account-create-jjkvc\" (UID: \"b1b08308-274c-46b2-a129-568fc7acc250\") " pod="openstack/keystone-a2de-account-create-jjkvc"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.789969 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfx26\" (UniqueName: \"kubernetes.io/projected/b1b08308-274c-46b2-a129-568fc7acc250-kube-api-access-gfx26\") pod \"keystone-a2de-account-create-jjkvc\" (UID: \"b1b08308-274c-46b2-a129-568fc7acc250\") " pod="openstack/keystone-a2de-account-create-jjkvc"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.831064 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a2de-account-create-jjkvc"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.835657 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a0b9-account-create-9q7xc"]
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.837420 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a0b9-account-create-9q7xc"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.840704 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.849585 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a0b9-account-create-9q7xc"]
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.865542 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7050330-b07e-4f5a-9fca-3ad560a9cb19-operator-scripts\") pod \"placement-db-create-dpjcl\" (UID: \"b7050330-b07e-4f5a-9fca-3ad560a9cb19\") " pod="openstack/placement-db-create-dpjcl"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.865694 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5vjj\" (UniqueName: \"kubernetes.io/projected/b7050330-b07e-4f5a-9fca-3ad560a9cb19-kube-api-access-z5vjj\") pod \"placement-db-create-dpjcl\" (UID: \"b7050330-b07e-4f5a-9fca-3ad560a9cb19\") " pod="openstack/placement-db-create-dpjcl"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.968319 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c19843e-ef57-4ca4-bc56-992f31cc5a87-operator-scripts\") pod \"placement-a0b9-account-create-9q7xc\" (UID: \"4c19843e-ef57-4ca4-bc56-992f31cc5a87\") " pod="openstack/placement-a0b9-account-create-9q7xc"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.968409 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5vjj\" (UniqueName: \"kubernetes.io/projected/b7050330-b07e-4f5a-9fca-3ad560a9cb19-kube-api-access-z5vjj\") pod \"placement-db-create-dpjcl\" (UID: \"b7050330-b07e-4f5a-9fca-3ad560a9cb19\") " pod="openstack/placement-db-create-dpjcl"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.969521 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7050330-b07e-4f5a-9fca-3ad560a9cb19-operator-scripts\") pod \"placement-db-create-dpjcl\" (UID: \"b7050330-b07e-4f5a-9fca-3ad560a9cb19\") " pod="openstack/placement-db-create-dpjcl"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.969551 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njxz6\" (UniqueName: \"kubernetes.io/projected/4c19843e-ef57-4ca4-bc56-992f31cc5a87-kube-api-access-njxz6\") pod \"placement-a0b9-account-create-9q7xc\" (UID: \"4c19843e-ef57-4ca4-bc56-992f31cc5a87\") " pod="openstack/placement-a0b9-account-create-9q7xc"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.970556 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7050330-b07e-4f5a-9fca-3ad560a9cb19-operator-scripts\") pod \"placement-db-create-dpjcl\" (UID: \"b7050330-b07e-4f5a-9fca-3ad560a9cb19\") " pod="openstack/placement-db-create-dpjcl"
Nov 24 12:43:09 crc kubenswrapper[4756]: I1124 12:43:09.991099 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5vjj\" (UniqueName: \"kubernetes.io/projected/b7050330-b07e-4f5a-9fca-3ad560a9cb19-kube-api-access-z5vjj\") pod \"placement-db-create-dpjcl\" (UID: \"b7050330-b07e-4f5a-9fca-3ad560a9cb19\") " pod="openstack/placement-db-create-dpjcl"
Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.040984 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rjhrm"]
Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.042459 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rjhrm"
Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.062427 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rjhrm"]
Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.071393 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njxz6\" (UniqueName: \"kubernetes.io/projected/4c19843e-ef57-4ca4-bc56-992f31cc5a87-kube-api-access-njxz6\") pod \"placement-a0b9-account-create-9q7xc\" (UID: \"4c19843e-ef57-4ca4-bc56-992f31cc5a87\") " pod="openstack/placement-a0b9-account-create-9q7xc"
Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.071534 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c19843e-ef57-4ca4-bc56-992f31cc5a87-operator-scripts\") pod \"placement-a0b9-account-create-9q7xc\" (UID: \"4c19843e-ef57-4ca4-bc56-992f31cc5a87\") " pod="openstack/placement-a0b9-account-create-9q7xc"
Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.072464 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c19843e-ef57-4ca4-bc56-992f31cc5a87-operator-scripts\") pod \"placement-a0b9-account-create-9q7xc\" (UID: \"4c19843e-ef57-4ca4-bc56-992f31cc5a87\") " pod="openstack/placement-a0b9-account-create-9q7xc"
Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.083854 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dpjcl"
Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.095758 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njxz6\" (UniqueName: \"kubernetes.io/projected/4c19843e-ef57-4ca4-bc56-992f31cc5a87-kube-api-access-njxz6\") pod \"placement-a0b9-account-create-9q7xc\" (UID: \"4c19843e-ef57-4ca4-bc56-992f31cc5a87\") " pod="openstack/placement-a0b9-account-create-9q7xc"
Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.161364 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7c0e-account-create-xsh7k"]
Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.162710 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7c0e-account-create-xsh7k"
Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.165917 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.167689 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a0b9-account-create-9q7xc" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.175273 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpglf\" (UniqueName: \"kubernetes.io/projected/e5662dd7-e194-4cff-8d36-a51bd442adc9-kube-api-access-cpglf\") pod \"glance-db-create-rjhrm\" (UID: \"e5662dd7-e194-4cff-8d36-a51bd442adc9\") " pod="openstack/glance-db-create-rjhrm" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.175314 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5662dd7-e194-4cff-8d36-a51bd442adc9-operator-scripts\") pod \"glance-db-create-rjhrm\" (UID: \"e5662dd7-e194-4cff-8d36-a51bd442adc9\") " pod="openstack/glance-db-create-rjhrm" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.194946 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7c0e-account-create-xsh7k"] Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.277951 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpglf\" (UniqueName: \"kubernetes.io/projected/e5662dd7-e194-4cff-8d36-a51bd442adc9-kube-api-access-cpglf\") pod \"glance-db-create-rjhrm\" (UID: \"e5662dd7-e194-4cff-8d36-a51bd442adc9\") " pod="openstack/glance-db-create-rjhrm" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.278046 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5662dd7-e194-4cff-8d36-a51bd442adc9-operator-scripts\") pod \"glance-db-create-rjhrm\" (UID: \"e5662dd7-e194-4cff-8d36-a51bd442adc9\") " pod="openstack/glance-db-create-rjhrm" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.278202 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de63eecd-9c64-459c-a274-a8bfc7362544-operator-scripts\") pod \"glance-7c0e-account-create-xsh7k\" (UID: \"de63eecd-9c64-459c-a274-a8bfc7362544\") " pod="openstack/glance-7c0e-account-create-xsh7k" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.278267 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5tfm\" (UniqueName: \"kubernetes.io/projected/de63eecd-9c64-459c-a274-a8bfc7362544-kube-api-access-w5tfm\") pod \"glance-7c0e-account-create-xsh7k\" (UID: \"de63eecd-9c64-459c-a274-a8bfc7362544\") " pod="openstack/glance-7c0e-account-create-xsh7k" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.279921 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5662dd7-e194-4cff-8d36-a51bd442adc9-operator-scripts\") pod \"glance-db-create-rjhrm\" (UID: \"e5662dd7-e194-4cff-8d36-a51bd442adc9\") " pod="openstack/glance-db-create-rjhrm" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.311305 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpglf\" (UniqueName: \"kubernetes.io/projected/e5662dd7-e194-4cff-8d36-a51bd442adc9-kube-api-access-cpglf\") pod \"glance-db-create-rjhrm\" (UID: \"e5662dd7-e194-4cff-8d36-a51bd442adc9\") " pod="openstack/glance-db-create-rjhrm" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.370094 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-rjhrm" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.379791 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de63eecd-9c64-459c-a274-a8bfc7362544-operator-scripts\") pod \"glance-7c0e-account-create-xsh7k\" (UID: \"de63eecd-9c64-459c-a274-a8bfc7362544\") " pod="openstack/glance-7c0e-account-create-xsh7k" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.379879 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5tfm\" (UniqueName: \"kubernetes.io/projected/de63eecd-9c64-459c-a274-a8bfc7362544-kube-api-access-w5tfm\") pod \"glance-7c0e-account-create-xsh7k\" (UID: \"de63eecd-9c64-459c-a274-a8bfc7362544\") " pod="openstack/glance-7c0e-account-create-xsh7k" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.380839 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de63eecd-9c64-459c-a274-a8bfc7362544-operator-scripts\") pod \"glance-7c0e-account-create-xsh7k\" (UID: \"de63eecd-9c64-459c-a274-a8bfc7362544\") " pod="openstack/glance-7c0e-account-create-xsh7k" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.407063 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5tfm\" (UniqueName: \"kubernetes.io/projected/de63eecd-9c64-459c-a274-a8bfc7362544-kube-api-access-w5tfm\") pod \"glance-7c0e-account-create-xsh7k\" (UID: \"de63eecd-9c64-459c-a274-a8bfc7362544\") " pod="openstack/glance-7c0e-account-create-xsh7k" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.481152 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7c0e-account-create-xsh7k" Nov 24 12:43:10 crc kubenswrapper[4756]: I1124 12:43:10.891611 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:10 crc kubenswrapper[4756]: E1124 12:43:10.891951 4756 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 12:43:10 crc kubenswrapper[4756]: E1124 12:43:10.891970 4756 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 12:43:10 crc kubenswrapper[4756]: E1124 12:43:10.892035 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift podName:9cf650c1-2692-4b3d-89c5-5e3e0178e213 nodeName:}" failed. No retries permitted until 2025-11-24 12:43:18.892012394 +0000 UTC m=+931.249526536 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift") pod "swift-storage-0" (UID: "9cf650c1-2692-4b3d-89c5-5e3e0178e213") : configmap "swift-ring-files" not found Nov 24 12:43:12 crc kubenswrapper[4756]: I1124 12:43:12.174399 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:43:12 crc kubenswrapper[4756]: I1124 12:43:12.238610 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b5qpm"] Nov 24 12:43:12 crc kubenswrapper[4756]: I1124 12:43:12.239365 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm" podUID="40d39949-cd58-4321-af4c-8427b4766e1e" containerName="dnsmasq-dns" containerID="cri-o://22e34f39f64c91689f7328f1d4e2f85bac60df7a288f186537734a53f1f098e3" gracePeriod=10 Nov 24 12:43:13 crc kubenswrapper[4756]: I1124 12:43:13.704413 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 24 12:43:15 crc kubenswrapper[4756]: I1124 12:43:15.823959 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm" podUID="40d39949-cd58-4321-af4c-8427b4766e1e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.105:5353: connect: connection refused" Nov 24 12:43:16 crc kubenswrapper[4756]: I1124 12:43:16.547508 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm" Nov 24 12:43:16 crc kubenswrapper[4756]: I1124 12:43:16.693534 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldcnw\" (UniqueName: \"kubernetes.io/projected/40d39949-cd58-4321-af4c-8427b4766e1e-kube-api-access-ldcnw\") pod \"40d39949-cd58-4321-af4c-8427b4766e1e\" (UID: \"40d39949-cd58-4321-af4c-8427b4766e1e\") " Nov 24 12:43:16 crc kubenswrapper[4756]: I1124 12:43:16.693917 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40d39949-cd58-4321-af4c-8427b4766e1e-dns-svc\") pod \"40d39949-cd58-4321-af4c-8427b4766e1e\" (UID: \"40d39949-cd58-4321-af4c-8427b4766e1e\") " Nov 24 12:43:16 crc kubenswrapper[4756]: I1124 12:43:16.694125 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d39949-cd58-4321-af4c-8427b4766e1e-config\") pod \"40d39949-cd58-4321-af4c-8427b4766e1e\" (UID: \"40d39949-cd58-4321-af4c-8427b4766e1e\") " Nov 24 12:43:16 crc kubenswrapper[4756]: I1124 12:43:16.697608 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d39949-cd58-4321-af4c-8427b4766e1e-kube-api-access-ldcnw" (OuterVolumeSpecName: "kube-api-access-ldcnw") pod "40d39949-cd58-4321-af4c-8427b4766e1e" (UID: "40d39949-cd58-4321-af4c-8427b4766e1e"). InnerVolumeSpecName "kube-api-access-ldcnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:16 crc kubenswrapper[4756]: I1124 12:43:16.759931 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40d39949-cd58-4321-af4c-8427b4766e1e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40d39949-cd58-4321-af4c-8427b4766e1e" (UID: "40d39949-cd58-4321-af4c-8427b4766e1e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:16 crc kubenswrapper[4756]: I1124 12:43:16.770434 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40d39949-cd58-4321-af4c-8427b4766e1e-config" (OuterVolumeSpecName: "config") pod "40d39949-cd58-4321-af4c-8427b4766e1e" (UID: "40d39949-cd58-4321-af4c-8427b4766e1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:16 crc kubenswrapper[4756]: I1124 12:43:16.795684 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40d39949-cd58-4321-af4c-8427b4766e1e-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:16 crc kubenswrapper[4756]: I1124 12:43:16.795713 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldcnw\" (UniqueName: \"kubernetes.io/projected/40d39949-cd58-4321-af4c-8427b4766e1e-kube-api-access-ldcnw\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:16 crc kubenswrapper[4756]: I1124 12:43:16.795725 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40d39949-cd58-4321-af4c-8427b4766e1e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:16 crc kubenswrapper[4756]: I1124 12:43:16.823761 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a0b9-account-create-9q7xc"] Nov 24 12:43:16 crc kubenswrapper[4756]: W1124 12:43:16.829021 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c19843e_ef57_4ca4_bc56_992f31cc5a87.slice/crio-8e5774b3a3627b33e215122b7333212f0de35b8079cdd19b87df1162cf2d2bfc WatchSource:0}: Error finding container 8e5774b3a3627b33e215122b7333212f0de35b8079cdd19b87df1162cf2d2bfc: Status 404 returned error can't find the container with id 8e5774b3a3627b33e215122b7333212f0de35b8079cdd19b87df1162cf2d2bfc Nov 24 12:43:17 crc 
kubenswrapper[4756]: I1124 12:43:17.017335 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2kfkz"] Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.027034 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dpjcl"] Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.038134 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a2de-account-create-jjkvc"] Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.045456 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7c0e-account-create-xsh7k"] Nov 24 12:43:17 crc kubenswrapper[4756]: W1124 12:43:17.052718 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7050330_b07e_4f5a_9fca_3ad560a9cb19.slice/crio-84da8ba67ad0e832bfb729c2b3d25f742bdaf4c7beb8a762bad4f26baf137489 WatchSource:0}: Error finding container 84da8ba67ad0e832bfb729c2b3d25f742bdaf4c7beb8a762bad4f26baf137489: Status 404 returned error can't find the container with id 84da8ba67ad0e832bfb729c2b3d25f742bdaf4c7beb8a762bad4f26baf137489 Nov 24 12:43:17 crc kubenswrapper[4756]: W1124 12:43:17.057778 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde63eecd_9c64_459c_a274_a8bfc7362544.slice/crio-45ed289aac3b07585820f21b9c8e0e86ac28823f61ba34e470f788ac5957d7fa WatchSource:0}: Error finding container 45ed289aac3b07585820f21b9c8e0e86ac28823f61ba34e470f788ac5957d7fa: Status 404 returned error can't find the container with id 45ed289aac3b07585820f21b9c8e0e86ac28823f61ba34e470f788ac5957d7fa Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.108986 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a2de-account-create-jjkvc" 
event={"ID":"b1b08308-274c-46b2-a129-568fc7acc250","Type":"ContainerStarted","Data":"2f0383029c5295de9ff7bbb73334ba379aaaa9757f6ade121afd85b23f99e270"} Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.110455 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2kfkz" event={"ID":"84d8e682-5102-429f-8b52-c8c962ef8ebd","Type":"ContainerStarted","Data":"1e9d3ae21889666ccb4a5a23f74ea91a3269992fda0b852a96f9f3fd83e22052"} Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.115535 4756 generic.go:334] "Generic (PLEG): container finished" podID="40d39949-cd58-4321-af4c-8427b4766e1e" containerID="22e34f39f64c91689f7328f1d4e2f85bac60df7a288f186537734a53f1f098e3" exitCode=0 Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.115642 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm" event={"ID":"40d39949-cd58-4321-af4c-8427b4766e1e","Type":"ContainerDied","Data":"22e34f39f64c91689f7328f1d4e2f85bac60df7a288f186537734a53f1f098e3"} Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.115675 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm" event={"ID":"40d39949-cd58-4321-af4c-8427b4766e1e","Type":"ContainerDied","Data":"a7ee4ce978dda854c2867609bc985029012d40ac5e93817be35fde307397f015"} Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.115698 4756 scope.go:117] "RemoveContainer" containerID="22e34f39f64c91689f7328f1d4e2f85bac60df7a288f186537734a53f1f098e3" Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.115836 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b5qpm" Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.120892 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a974f608-51c8-4650-be4a-fad42e19bd48","Type":"ContainerStarted","Data":"cb991172ccc18a386f0a25025a7e6959f41f8e598efadcf34da8f1091727d15b"} Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.122933 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dpjcl" event={"ID":"b7050330-b07e-4f5a-9fca-3ad560a9cb19","Type":"ContainerStarted","Data":"84da8ba67ad0e832bfb729c2b3d25f742bdaf4c7beb8a762bad4f26baf137489"} Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.124505 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0b9-account-create-9q7xc" event={"ID":"4c19843e-ef57-4ca4-bc56-992f31cc5a87","Type":"ContainerStarted","Data":"39c533b88456aa917835fbf971f968dcefd6361f4a1dbb4218c52e9f3aaa203f"} Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.124539 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0b9-account-create-9q7xc" event={"ID":"4c19843e-ef57-4ca4-bc56-992f31cc5a87","Type":"ContainerStarted","Data":"8e5774b3a3627b33e215122b7333212f0de35b8079cdd19b87df1162cf2d2bfc"} Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.127074 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4x5lq" event={"ID":"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07","Type":"ContainerStarted","Data":"2fe70ee52b2025b74bf8a4acb2dc0fb8bf610f176cf4b69cbfcd7ab01460a8d4"} Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.130715 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7c0e-account-create-xsh7k" event={"ID":"de63eecd-9c64-459c-a274-a8bfc7362544","Type":"ContainerStarted","Data":"45ed289aac3b07585820f21b9c8e0e86ac28823f61ba34e470f788ac5957d7fa"} Nov 24 12:43:17 crc 
kubenswrapper[4756]: I1124 12:43:17.140470 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-a0b9-account-create-9q7xc" podStartSLOduration=8.140450882 podStartE2EDuration="8.140450882s" podCreationTimestamp="2025-11-24 12:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:43:17.139723693 +0000 UTC m=+929.497237845" watchObservedRunningTime="2025-11-24 12:43:17.140450882 +0000 UTC m=+929.497965024" Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.159837 4756 scope.go:117] "RemoveContainer" containerID="e7eaaa35d560206746f37d71a627999e4a7afee71427eaae44991296945b495b" Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.181807 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-4x5lq" podStartSLOduration=2.452549519 podStartE2EDuration="14.181778636s" podCreationTimestamp="2025-11-24 12:43:03 +0000 UTC" firstStartedPulling="2025-11-24 12:43:04.693271398 +0000 UTC m=+917.050785540" lastFinishedPulling="2025-11-24 12:43:16.422500515 +0000 UTC m=+928.780014657" observedRunningTime="2025-11-24 12:43:17.16720654 +0000 UTC m=+929.524720692" watchObservedRunningTime="2025-11-24 12:43:17.181778636 +0000 UTC m=+929.539292788" Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.223589 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rjhrm"] Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.229753 4756 scope.go:117] "RemoveContainer" containerID="22e34f39f64c91689f7328f1d4e2f85bac60df7a288f186537734a53f1f098e3" Nov 24 12:43:17 crc kubenswrapper[4756]: E1124 12:43:17.231614 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22e34f39f64c91689f7328f1d4e2f85bac60df7a288f186537734a53f1f098e3\": container with ID starting with 
22e34f39f64c91689f7328f1d4e2f85bac60df7a288f186537734a53f1f098e3 not found: ID does not exist" containerID="22e34f39f64c91689f7328f1d4e2f85bac60df7a288f186537734a53f1f098e3" Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.231661 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22e34f39f64c91689f7328f1d4e2f85bac60df7a288f186537734a53f1f098e3"} err="failed to get container status \"22e34f39f64c91689f7328f1d4e2f85bac60df7a288f186537734a53f1f098e3\": rpc error: code = NotFound desc = could not find container \"22e34f39f64c91689f7328f1d4e2f85bac60df7a288f186537734a53f1f098e3\": container with ID starting with 22e34f39f64c91689f7328f1d4e2f85bac60df7a288f186537734a53f1f098e3 not found: ID does not exist" Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.231687 4756 scope.go:117] "RemoveContainer" containerID="e7eaaa35d560206746f37d71a627999e4a7afee71427eaae44991296945b495b" Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.231768 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b5qpm"] Nov 24 12:43:17 crc kubenswrapper[4756]: E1124 12:43:17.232797 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7eaaa35d560206746f37d71a627999e4a7afee71427eaae44991296945b495b\": container with ID starting with e7eaaa35d560206746f37d71a627999e4a7afee71427eaae44991296945b495b not found: ID does not exist" containerID="e7eaaa35d560206746f37d71a627999e4a7afee71427eaae44991296945b495b" Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.232849 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7eaaa35d560206746f37d71a627999e4a7afee71427eaae44991296945b495b"} err="failed to get container status \"e7eaaa35d560206746f37d71a627999e4a7afee71427eaae44991296945b495b\": rpc error: code = NotFound desc = could not find container 
\"e7eaaa35d560206746f37d71a627999e4a7afee71427eaae44991296945b495b\": container with ID starting with e7eaaa35d560206746f37d71a627999e4a7afee71427eaae44991296945b495b not found: ID does not exist" Nov 24 12:43:17 crc kubenswrapper[4756]: I1124 12:43:17.237274 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b5qpm"] Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.139693 4756 generic.go:334] "Generic (PLEG): container finished" podID="b7050330-b07e-4f5a-9fca-3ad560a9cb19" containerID="d6e635291fb4952799c94031342ad1e35f4ba9412187c6a0ee98486367d43249" exitCode=0 Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.139793 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dpjcl" event={"ID":"b7050330-b07e-4f5a-9fca-3ad560a9cb19","Type":"ContainerDied","Data":"d6e635291fb4952799c94031342ad1e35f4ba9412187c6a0ee98486367d43249"} Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.142974 4756 generic.go:334] "Generic (PLEG): container finished" podID="4c19843e-ef57-4ca4-bc56-992f31cc5a87" containerID="39c533b88456aa917835fbf971f968dcefd6361f4a1dbb4218c52e9f3aaa203f" exitCode=0 Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.143053 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0b9-account-create-9q7xc" event={"ID":"4c19843e-ef57-4ca4-bc56-992f31cc5a87","Type":"ContainerDied","Data":"39c533b88456aa917835fbf971f968dcefd6361f4a1dbb4218c52e9f3aaa203f"} Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.144701 4756 generic.go:334] "Generic (PLEG): container finished" podID="de63eecd-9c64-459c-a274-a8bfc7362544" containerID="f54a8507680690dab2940467b788218d574a40a87b42ab5a070716690d524a57" exitCode=0 Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.144781 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7c0e-account-create-xsh7k" 
event={"ID":"de63eecd-9c64-459c-a274-a8bfc7362544","Type":"ContainerDied","Data":"f54a8507680690dab2940467b788218d574a40a87b42ab5a070716690d524a57"} Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.146682 4756 generic.go:334] "Generic (PLEG): container finished" podID="b1b08308-274c-46b2-a129-568fc7acc250" containerID="f63679fc5389a2d3c3c47ffe215b72526b85106d4f3db6cda64d71396b13b728" exitCode=0 Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.146752 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a2de-account-create-jjkvc" event={"ID":"b1b08308-274c-46b2-a129-568fc7acc250","Type":"ContainerDied","Data":"f63679fc5389a2d3c3c47ffe215b72526b85106d4f3db6cda64d71396b13b728"} Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.150072 4756 generic.go:334] "Generic (PLEG): container finished" podID="84d8e682-5102-429f-8b52-c8c962ef8ebd" containerID="80fb22fcba05b490951fdd353a8a5c70589aa4dc6ef60a3d5e660b94a566e63c" exitCode=0 Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.150113 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2kfkz" event={"ID":"84d8e682-5102-429f-8b52-c8c962ef8ebd","Type":"ContainerDied","Data":"80fb22fcba05b490951fdd353a8a5c70589aa4dc6ef60a3d5e660b94a566e63c"} Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.151763 4756 generic.go:334] "Generic (PLEG): container finished" podID="e5662dd7-e194-4cff-8d36-a51bd442adc9" containerID="5b82b9892f191b666d8b836a948eba8f900b7213faa6b85c296d5b8c66e7baae" exitCode=0 Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.151795 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rjhrm" event={"ID":"e5662dd7-e194-4cff-8d36-a51bd442adc9","Type":"ContainerDied","Data":"5b82b9892f191b666d8b836a948eba8f900b7213faa6b85c296d5b8c66e7baae"} Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.151850 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-create-rjhrm" event={"ID":"e5662dd7-e194-4cff-8d36-a51bd442adc9","Type":"ContainerStarted","Data":"4890ac68d156bb59b6e2bf63fffcda6ee6afc0d260bd414f3a79b569e0c392ef"} Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.486295 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40d39949-cd58-4321-af4c-8427b4766e1e" path="/var/lib/kubelet/pods/40d39949-cd58-4321-af4c-8427b4766e1e/volumes" Nov 24 12:43:18 crc kubenswrapper[4756]: I1124 12:43:18.949345 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:18 crc kubenswrapper[4756]: E1124 12:43:18.949560 4756 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 12:43:18 crc kubenswrapper[4756]: E1124 12:43:18.949583 4756 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 12:43:18 crc kubenswrapper[4756]: E1124 12:43:18.949645 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift podName:9cf650c1-2692-4b3d-89c5-5e3e0178e213 nodeName:}" failed. No retries permitted until 2025-11-24 12:43:34.949621492 +0000 UTC m=+947.307135634 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift") pod "swift-storage-0" (UID: "9cf650c1-2692-4b3d-89c5-5e3e0178e213") : configmap "swift-ring-files" not found Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.162675 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a974f608-51c8-4650-be4a-fad42e19bd48","Type":"ContainerStarted","Data":"66ea414b6b36b7923af44699931c246763f83afe867a1b52f8bdfc171dbedc3c"} Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.544263 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a0b9-account-create-9q7xc" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.667189 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njxz6\" (UniqueName: \"kubernetes.io/projected/4c19843e-ef57-4ca4-bc56-992f31cc5a87-kube-api-access-njxz6\") pod \"4c19843e-ef57-4ca4-bc56-992f31cc5a87\" (UID: \"4c19843e-ef57-4ca4-bc56-992f31cc5a87\") " Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.667312 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c19843e-ef57-4ca4-bc56-992f31cc5a87-operator-scripts\") pod \"4c19843e-ef57-4ca4-bc56-992f31cc5a87\" (UID: \"4c19843e-ef57-4ca4-bc56-992f31cc5a87\") " Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.669114 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c19843e-ef57-4ca4-bc56-992f31cc5a87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c19843e-ef57-4ca4-bc56-992f31cc5a87" (UID: "4c19843e-ef57-4ca4-bc56-992f31cc5a87"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.687473 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c19843e-ef57-4ca4-bc56-992f31cc5a87-kube-api-access-njxz6" (OuterVolumeSpecName: "kube-api-access-njxz6") pod "4c19843e-ef57-4ca4-bc56-992f31cc5a87" (UID: "4c19843e-ef57-4ca4-bc56-992f31cc5a87"). InnerVolumeSpecName "kube-api-access-njxz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.769659 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njxz6\" (UniqueName: \"kubernetes.io/projected/4c19843e-ef57-4ca4-bc56-992f31cc5a87-kube-api-access-njxz6\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.769696 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c19843e-ef57-4ca4-bc56-992f31cc5a87-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.821640 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2kfkz" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.824577 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a2de-account-create-jjkvc" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.838327 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rjhrm" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.862663 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7c0e-account-create-xsh7k" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.868310 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-dpjcl" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.871442 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de63eecd-9c64-459c-a274-a8bfc7362544-operator-scripts\") pod \"de63eecd-9c64-459c-a274-a8bfc7362544\" (UID: \"de63eecd-9c64-459c-a274-a8bfc7362544\") " Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.871514 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5tfm\" (UniqueName: \"kubernetes.io/projected/de63eecd-9c64-459c-a274-a8bfc7362544-kube-api-access-w5tfm\") pod \"de63eecd-9c64-459c-a274-a8bfc7362544\" (UID: \"de63eecd-9c64-459c-a274-a8bfc7362544\") " Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.871535 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7050330-b07e-4f5a-9fca-3ad560a9cb19-operator-scripts\") pod \"b7050330-b07e-4f5a-9fca-3ad560a9cb19\" (UID: \"b7050330-b07e-4f5a-9fca-3ad560a9cb19\") " Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.871567 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1b08308-274c-46b2-a129-568fc7acc250-operator-scripts\") pod \"b1b08308-274c-46b2-a129-568fc7acc250\" (UID: \"b1b08308-274c-46b2-a129-568fc7acc250\") " Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.871587 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5vjj\" (UniqueName: \"kubernetes.io/projected/b7050330-b07e-4f5a-9fca-3ad560a9cb19-kube-api-access-z5vjj\") pod \"b7050330-b07e-4f5a-9fca-3ad560a9cb19\" (UID: \"b7050330-b07e-4f5a-9fca-3ad560a9cb19\") " Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.871605 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-hqpgv\" (UniqueName: \"kubernetes.io/projected/84d8e682-5102-429f-8b52-c8c962ef8ebd-kube-api-access-hqpgv\") pod \"84d8e682-5102-429f-8b52-c8c962ef8ebd\" (UID: \"84d8e682-5102-429f-8b52-c8c962ef8ebd\") " Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.871634 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpglf\" (UniqueName: \"kubernetes.io/projected/e5662dd7-e194-4cff-8d36-a51bd442adc9-kube-api-access-cpglf\") pod \"e5662dd7-e194-4cff-8d36-a51bd442adc9\" (UID: \"e5662dd7-e194-4cff-8d36-a51bd442adc9\") " Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.871674 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfx26\" (UniqueName: \"kubernetes.io/projected/b1b08308-274c-46b2-a129-568fc7acc250-kube-api-access-gfx26\") pod \"b1b08308-274c-46b2-a129-568fc7acc250\" (UID: \"b1b08308-274c-46b2-a129-568fc7acc250\") " Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.871708 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5662dd7-e194-4cff-8d36-a51bd442adc9-operator-scripts\") pod \"e5662dd7-e194-4cff-8d36-a51bd442adc9\" (UID: \"e5662dd7-e194-4cff-8d36-a51bd442adc9\") " Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.871729 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84d8e682-5102-429f-8b52-c8c962ef8ebd-operator-scripts\") pod \"84d8e682-5102-429f-8b52-c8c962ef8ebd\" (UID: \"84d8e682-5102-429f-8b52-c8c962ef8ebd\") " Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.872626 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84d8e682-5102-429f-8b52-c8c962ef8ebd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"84d8e682-5102-429f-8b52-c8c962ef8ebd" (UID: "84d8e682-5102-429f-8b52-c8c962ef8ebd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.874023 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de63eecd-9c64-459c-a274-a8bfc7362544-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de63eecd-9c64-459c-a274-a8bfc7362544" (UID: "de63eecd-9c64-459c-a274-a8bfc7362544"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.885505 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7050330-b07e-4f5a-9fca-3ad560a9cb19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7050330-b07e-4f5a-9fca-3ad560a9cb19" (UID: "b7050330-b07e-4f5a-9fca-3ad560a9cb19"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.889467 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b08308-274c-46b2-a129-568fc7acc250-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1b08308-274c-46b2-a129-568fc7acc250" (UID: "b1b08308-274c-46b2-a129-568fc7acc250"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.896378 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5662dd7-e194-4cff-8d36-a51bd442adc9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5662dd7-e194-4cff-8d36-a51bd442adc9" (UID: "e5662dd7-e194-4cff-8d36-a51bd442adc9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.899202 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d8e682-5102-429f-8b52-c8c962ef8ebd-kube-api-access-hqpgv" (OuterVolumeSpecName: "kube-api-access-hqpgv") pod "84d8e682-5102-429f-8b52-c8c962ef8ebd" (UID: "84d8e682-5102-429f-8b52-c8c962ef8ebd"). InnerVolumeSpecName "kube-api-access-hqpgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.900881 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7050330-b07e-4f5a-9fca-3ad560a9cb19-kube-api-access-z5vjj" (OuterVolumeSpecName: "kube-api-access-z5vjj") pod "b7050330-b07e-4f5a-9fca-3ad560a9cb19" (UID: "b7050330-b07e-4f5a-9fca-3ad560a9cb19"). InnerVolumeSpecName "kube-api-access-z5vjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.909123 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de63eecd-9c64-459c-a274-a8bfc7362544-kube-api-access-w5tfm" (OuterVolumeSpecName: "kube-api-access-w5tfm") pod "de63eecd-9c64-459c-a274-a8bfc7362544" (UID: "de63eecd-9c64-459c-a274-a8bfc7362544"). InnerVolumeSpecName "kube-api-access-w5tfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.910443 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5662dd7-e194-4cff-8d36-a51bd442adc9-kube-api-access-cpglf" (OuterVolumeSpecName: "kube-api-access-cpglf") pod "e5662dd7-e194-4cff-8d36-a51bd442adc9" (UID: "e5662dd7-e194-4cff-8d36-a51bd442adc9"). InnerVolumeSpecName "kube-api-access-cpglf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.910821 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b08308-274c-46b2-a129-568fc7acc250-kube-api-access-gfx26" (OuterVolumeSpecName: "kube-api-access-gfx26") pod "b1b08308-274c-46b2-a129-568fc7acc250" (UID: "b1b08308-274c-46b2-a129-568fc7acc250"). InnerVolumeSpecName "kube-api-access-gfx26". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.972893 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de63eecd-9c64-459c-a274-a8bfc7362544-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.973224 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5tfm\" (UniqueName: \"kubernetes.io/projected/de63eecd-9c64-459c-a274-a8bfc7362544-kube-api-access-w5tfm\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.973299 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7050330-b07e-4f5a-9fca-3ad560a9cb19-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.973361 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1b08308-274c-46b2-a129-568fc7acc250-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.973420 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5vjj\" (UniqueName: \"kubernetes.io/projected/b7050330-b07e-4f5a-9fca-3ad560a9cb19-kube-api-access-z5vjj\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.973474 4756 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-hqpgv\" (UniqueName: \"kubernetes.io/projected/84d8e682-5102-429f-8b52-c8c962ef8ebd-kube-api-access-hqpgv\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.973539 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpglf\" (UniqueName: \"kubernetes.io/projected/e5662dd7-e194-4cff-8d36-a51bd442adc9-kube-api-access-cpglf\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.973594 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfx26\" (UniqueName: \"kubernetes.io/projected/b1b08308-274c-46b2-a129-568fc7acc250-kube-api-access-gfx26\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.973646 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5662dd7-e194-4cff-8d36-a51bd442adc9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:19 crc kubenswrapper[4756]: I1124 12:43:19.973702 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84d8e682-5102-429f-8b52-c8c962ef8ebd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.174789 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7c0e-account-create-xsh7k" Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.174778 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7c0e-account-create-xsh7k" event={"ID":"de63eecd-9c64-459c-a274-a8bfc7362544","Type":"ContainerDied","Data":"45ed289aac3b07585820f21b9c8e0e86ac28823f61ba34e470f788ac5957d7fa"} Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.174939 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45ed289aac3b07585820f21b9c8e0e86ac28823f61ba34e470f788ac5957d7fa" Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.177133 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a2de-account-create-jjkvc" event={"ID":"b1b08308-274c-46b2-a129-568fc7acc250","Type":"ContainerDied","Data":"2f0383029c5295de9ff7bbb73334ba379aaaa9757f6ade121afd85b23f99e270"} Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.177189 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f0383029c5295de9ff7bbb73334ba379aaaa9757f6ade121afd85b23f99e270" Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.177190 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a2de-account-create-jjkvc" Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.179071 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2kfkz" event={"ID":"84d8e682-5102-429f-8b52-c8c962ef8ebd","Type":"ContainerDied","Data":"1e9d3ae21889666ccb4a5a23f74ea91a3269992fda0b852a96f9f3fd83e22052"} Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.179104 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e9d3ae21889666ccb4a5a23f74ea91a3269992fda0b852a96f9f3fd83e22052" Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.179184 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-2kfkz" Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.183449 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rjhrm" Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.183493 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rjhrm" event={"ID":"e5662dd7-e194-4cff-8d36-a51bd442adc9","Type":"ContainerDied","Data":"4890ac68d156bb59b6e2bf63fffcda6ee6afc0d260bd414f3a79b569e0c392ef"} Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.183539 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4890ac68d156bb59b6e2bf63fffcda6ee6afc0d260bd414f3a79b569e0c392ef" Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.185297 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dpjcl" event={"ID":"b7050330-b07e-4f5a-9fca-3ad560a9cb19","Type":"ContainerDied","Data":"84da8ba67ad0e832bfb729c2b3d25f742bdaf4c7beb8a762bad4f26baf137489"} Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.185334 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84da8ba67ad0e832bfb729c2b3d25f742bdaf4c7beb8a762bad4f26baf137489" Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.185524 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-dpjcl" Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.187323 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0b9-account-create-9q7xc" event={"ID":"4c19843e-ef57-4ca4-bc56-992f31cc5a87","Type":"ContainerDied","Data":"8e5774b3a3627b33e215122b7333212f0de35b8079cdd19b87df1162cf2d2bfc"} Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.187579 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e5774b3a3627b33e215122b7333212f0de35b8079cdd19b87df1162cf2d2bfc" Nov 24 12:43:20 crc kubenswrapper[4756]: I1124 12:43:20.187383 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a0b9-account-create-9q7xc" Nov 24 12:43:22 crc kubenswrapper[4756]: I1124 12:43:22.205933 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a974f608-51c8-4650-be4a-fad42e19bd48","Type":"ContainerStarted","Data":"c5c576c9e27c38861cac54c8eb0edaee0bae680ad40155da103635bcb28aa14c"} Nov 24 12:43:22 crc kubenswrapper[4756]: I1124 12:43:22.231977 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=11.821721832 podStartE2EDuration="51.231957039s" podCreationTimestamp="2025-11-24 12:42:31 +0000 UTC" firstStartedPulling="2025-11-24 12:42:42.401443127 +0000 UTC m=+894.758957269" lastFinishedPulling="2025-11-24 12:43:21.811678334 +0000 UTC m=+934.169192476" observedRunningTime="2025-11-24 12:43:22.22831495 +0000 UTC m=+934.585829112" watchObservedRunningTime="2025-11-24 12:43:22.231957039 +0000 UTC m=+934.589471181" Nov 24 12:43:23 crc kubenswrapper[4756]: I1124 12:43:23.375686 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:24 crc kubenswrapper[4756]: I1124 12:43:24.071958 4756 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2lk9k" podUID="f9af141a-c02a-4457-b68e-111765a62280" containerName="ovn-controller" probeResult="failure" output=< Nov 24 12:43:24 crc kubenswrapper[4756]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 12:43:24 crc kubenswrapper[4756]: > Nov 24 12:43:24 crc kubenswrapper[4756]: I1124 12:43:24.100135 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:43:24 crc kubenswrapper[4756]: I1124 12:43:24.222858 4756 generic.go:334] "Generic (PLEG): container finished" podID="6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07" containerID="2fe70ee52b2025b74bf8a4acb2dc0fb8bf610f176cf4b69cbfcd7ab01460a8d4" exitCode=0 Nov 24 12:43:24 crc kubenswrapper[4756]: I1124 12:43:24.222914 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4x5lq" event={"ID":"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07","Type":"ContainerDied","Data":"2fe70ee52b2025b74bf8a4acb2dc0fb8bf610f176cf4b69cbfcd7ab01460a8d4"} Nov 24 12:43:24 crc kubenswrapper[4756]: I1124 12:43:24.224953 4756 generic.go:334] "Generic (PLEG): container finished" podID="eda12351-eabf-4909-a8fe-4cc2c3dabdb9" containerID="c7e6c0443476a62093749ddba3449300f67259c6a11162bd7d5cfd9095a76317" exitCode=0 Nov 24 12:43:24 crc kubenswrapper[4756]: I1124 12:43:24.225012 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eda12351-eabf-4909-a8fe-4cc2c3dabdb9","Type":"ContainerDied","Data":"c7e6c0443476a62093749ddba3449300f67259c6a11162bd7d5cfd9095a76317"} Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.235568 4756 generic.go:334] "Generic (PLEG): container finished" podID="12075358-f893-49bc-9ace-dda0ce2865ec" containerID="61f8a5a4378bc5c37ebb21ccd8540519b1be51c2229aba8976e4848e526ac277" exitCode=0 Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.235657 4756 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12075358-f893-49bc-9ace-dda0ce2865ec","Type":"ContainerDied","Data":"61f8a5a4378bc5c37ebb21ccd8540519b1be51c2229aba8976e4848e526ac277"} Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.240129 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eda12351-eabf-4909-a8fe-4cc2c3dabdb9","Type":"ContainerStarted","Data":"fc172aada172233b43b24d21dfeec3ae07cd76e8a3b14af0a6672ecdaa5a9530"} Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.240453 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.325570 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.38098899 podStartE2EDuration="1m0.325552395s" podCreationTimestamp="2025-11-24 12:42:25 +0000 UTC" firstStartedPulling="2025-11-24 12:42:41.862433534 +0000 UTC m=+894.219947676" lastFinishedPulling="2025-11-24 12:42:49.806996939 +0000 UTC m=+902.164511081" observedRunningTime="2025-11-24 12:43:25.32129581 +0000 UTC m=+937.678809962" watchObservedRunningTime="2025-11-24 12:43:25.325552395 +0000 UTC m=+937.683066537" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.366564 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-tflqn"] Nov 24 12:43:25 crc kubenswrapper[4756]: E1124 12:43:25.366915 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c19843e-ef57-4ca4-bc56-992f31cc5a87" containerName="mariadb-account-create" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.366927 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c19843e-ef57-4ca4-bc56-992f31cc5a87" containerName="mariadb-account-create" Nov 24 12:43:25 crc kubenswrapper[4756]: E1124 12:43:25.366942 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e5662dd7-e194-4cff-8d36-a51bd442adc9" containerName="mariadb-database-create" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.366947 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5662dd7-e194-4cff-8d36-a51bd442adc9" containerName="mariadb-database-create" Nov 24 12:43:25 crc kubenswrapper[4756]: E1124 12:43:25.366965 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d39949-cd58-4321-af4c-8427b4766e1e" containerName="init" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.366972 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d39949-cd58-4321-af4c-8427b4766e1e" containerName="init" Nov 24 12:43:25 crc kubenswrapper[4756]: E1124 12:43:25.366980 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de63eecd-9c64-459c-a274-a8bfc7362544" containerName="mariadb-account-create" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.366986 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="de63eecd-9c64-459c-a274-a8bfc7362544" containerName="mariadb-account-create" Nov 24 12:43:25 crc kubenswrapper[4756]: E1124 12:43:25.366997 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b08308-274c-46b2-a129-568fc7acc250" containerName="mariadb-account-create" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.367003 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b08308-274c-46b2-a129-568fc7acc250" containerName="mariadb-account-create" Nov 24 12:43:25 crc kubenswrapper[4756]: E1124 12:43:25.367012 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d8e682-5102-429f-8b52-c8c962ef8ebd" containerName="mariadb-database-create" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.367018 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d8e682-5102-429f-8b52-c8c962ef8ebd" containerName="mariadb-database-create" Nov 24 12:43:25 crc kubenswrapper[4756]: E1124 12:43:25.367034 4756 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b7050330-b07e-4f5a-9fca-3ad560a9cb19" containerName="mariadb-database-create" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.367040 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7050330-b07e-4f5a-9fca-3ad560a9cb19" containerName="mariadb-database-create" Nov 24 12:43:25 crc kubenswrapper[4756]: E1124 12:43:25.367051 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d39949-cd58-4321-af4c-8427b4766e1e" containerName="dnsmasq-dns" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.367057 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d39949-cd58-4321-af4c-8427b4766e1e" containerName="dnsmasq-dns" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.367224 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b08308-274c-46b2-a129-568fc7acc250" containerName="mariadb-account-create" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.367251 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d8e682-5102-429f-8b52-c8c962ef8ebd" containerName="mariadb-database-create" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.367269 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="de63eecd-9c64-459c-a274-a8bfc7362544" containerName="mariadb-account-create" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.367284 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c19843e-ef57-4ca4-bc56-992f31cc5a87" containerName="mariadb-account-create" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.367301 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7050330-b07e-4f5a-9fca-3ad560a9cb19" containerName="mariadb-database-create" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.367315 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="40d39949-cd58-4321-af4c-8427b4766e1e" containerName="dnsmasq-dns" Nov 24 12:43:25 crc 
kubenswrapper[4756]: I1124 12:43:25.367329 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5662dd7-e194-4cff-8d36-a51bd442adc9" containerName="mariadb-database-create" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.367885 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.372487 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.372633 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qx8l5" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.382026 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tflqn"] Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.479758 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4jmv\" (UniqueName: \"kubernetes.io/projected/79cae2a4-d229-4c70-b19f-b9016c530697-kube-api-access-p4jmv\") pod \"glance-db-sync-tflqn\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.479916 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-db-sync-config-data\") pod \"glance-db-sync-tflqn\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.480076 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-combined-ca-bundle\") pod \"glance-db-sync-tflqn\" (UID: 
\"79cae2a4-d229-4c70-b19f-b9016c530697\") " pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.480140 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-config-data\") pod \"glance-db-sync-tflqn\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.553455 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4x5lq" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.581990 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-combined-ca-bundle\") pod \"glance-db-sync-tflqn\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.582055 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-config-data\") pod \"glance-db-sync-tflqn\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.582102 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4jmv\" (UniqueName: \"kubernetes.io/projected/79cae2a4-d229-4c70-b19f-b9016c530697-kube-api-access-p4jmv\") pod \"glance-db-sync-tflqn\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.582185 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-db-sync-config-data\") pod \"glance-db-sync-tflqn\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.587087 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-config-data\") pod \"glance-db-sync-tflqn\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.587241 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-combined-ca-bundle\") pod \"glance-db-sync-tflqn\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.596216 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-db-sync-config-data\") pod \"glance-db-sync-tflqn\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.605418 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4jmv\" (UniqueName: \"kubernetes.io/projected/79cae2a4-d229-4c70-b19f-b9016c530697-kube-api-access-p4jmv\") pod \"glance-db-sync-tflqn\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.683516 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-scripts\") pod \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\" (UID: 
\"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.683918 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-dispersionconf\") pod \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.683987 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-etc-swift\") pod \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.684083 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-swiftconf\") pod \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.684112 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlrpg\" (UniqueName: \"kubernetes.io/projected/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-kube-api-access-rlrpg\") pod \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.684145 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-ring-data-devices\") pod \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.684188 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-combined-ca-bundle\") pod \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\" (UID: \"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07\") " Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.684920 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07" (UID: "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.685313 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07" (UID: "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.687958 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-kube-api-access-rlrpg" (OuterVolumeSpecName: "kube-api-access-rlrpg") pod "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07" (UID: "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07"). InnerVolumeSpecName "kube-api-access-rlrpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.690073 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07" (UID: "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.711271 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-scripts" (OuterVolumeSpecName: "scripts") pod "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07" (UID: "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.712927 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07" (UID: "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.720523 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07" (UID: "6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.753395 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tflqn" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.786295 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlrpg\" (UniqueName: \"kubernetes.io/projected/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-kube-api-access-rlrpg\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.786364 4756 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.786377 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.786388 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.786396 4756 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.786407 4756 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:25 crc kubenswrapper[4756]: I1124 12:43:25.786415 4756 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:26 crc kubenswrapper[4756]: I1124 12:43:26.248360 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12075358-f893-49bc-9ace-dda0ce2865ec","Type":"ContainerStarted","Data":"742e913f8e53724d19fed9df2c78a5575a02242db77eada142090a9e60473214"} Nov 24 12:43:26 crc kubenswrapper[4756]: I1124 12:43:26.249693 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:43:26 crc kubenswrapper[4756]: I1124 12:43:26.251508 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4x5lq" Nov 24 12:43:26 crc kubenswrapper[4756]: I1124 12:43:26.253270 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4x5lq" event={"ID":"6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07","Type":"ContainerDied","Data":"c190de2a37054f5be4ae5cf36bf10c626d7565f6d529dc057ef378f549468a27"} Nov 24 12:43:26 crc kubenswrapper[4756]: I1124 12:43:26.263535 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c190de2a37054f5be4ae5cf36bf10c626d7565f6d529dc057ef378f549468a27" Nov 24 12:43:26 crc kubenswrapper[4756]: I1124 12:43:26.293917 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.572432199 podStartE2EDuration="1m1.293894459s" podCreationTimestamp="2025-11-24 12:42:25 +0000 UTC" firstStartedPulling="2025-11-24 12:42:42.086794173 +0000 UTC m=+894.444308315" lastFinishedPulling="2025-11-24 12:42:49.808256433 +0000 UTC m=+902.165770575" observedRunningTime="2025-11-24 12:43:26.288388929 +0000 UTC m=+938.645903091" watchObservedRunningTime="2025-11-24 12:43:26.293894459 +0000 UTC m=+938.651408611" Nov 24 12:43:26 crc kubenswrapper[4756]: I1124 12:43:26.362097 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tflqn"] Nov 24 12:43:27 crc kubenswrapper[4756]: I1124 12:43:27.259640 4756 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tflqn" event={"ID":"79cae2a4-d229-4c70-b19f-b9016c530697","Type":"ContainerStarted","Data":"a5e66925af4f7363de64d2d3f450a6520e2ce14c2927790d9dbbe9a77e964455"} Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.075140 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2lk9k" podUID="f9af141a-c02a-4457-b68e-111765a62280" containerName="ovn-controller" probeResult="failure" output=< Nov 24 12:43:29 crc kubenswrapper[4756]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 12:43:29 crc kubenswrapper[4756]: > Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.107231 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-v5r4t" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.327666 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2lk9k-config-ndtqx"] Nov 24 12:43:29 crc kubenswrapper[4756]: E1124 12:43:29.328103 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07" containerName="swift-ring-rebalance" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.328129 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07" containerName="swift-ring-rebalance" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.328407 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07" containerName="swift-ring-rebalance" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.329133 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.332170 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.336008 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2lk9k-config-ndtqx"] Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.464705 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-run-ovn\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.464755 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqs5d\" (UniqueName: \"kubernetes.io/projected/638f6ae9-d661-46d3-b5b0-fba360ece658-kube-api-access-nqs5d\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.464852 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/638f6ae9-d661-46d3-b5b0-fba360ece658-additional-scripts\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.464874 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-run\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: 
\"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.464897 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-log-ovn\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.464942 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/638f6ae9-d661-46d3-b5b0-fba360ece658-scripts\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.566162 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/638f6ae9-d661-46d3-b5b0-fba360ece658-additional-scripts\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.566218 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-run\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.566239 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-log-ovn\") pod \"ovn-controller-2lk9k-config-ndtqx\" 
(UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.566284 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/638f6ae9-d661-46d3-b5b0-fba360ece658-scripts\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.566311 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-run-ovn\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.566349 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqs5d\" (UniqueName: \"kubernetes.io/projected/638f6ae9-d661-46d3-b5b0-fba360ece658-kube-api-access-nqs5d\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.567034 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-run\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.567100 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-log-ovn\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: 
\"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.567243 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/638f6ae9-d661-46d3-b5b0-fba360ece658-additional-scripts\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.567361 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-run-ovn\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.569224 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/638f6ae9-d661-46d3-b5b0-fba360ece658-scripts\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.590449 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqs5d\" (UniqueName: \"kubernetes.io/projected/638f6ae9-d661-46d3-b5b0-fba360ece658-kube-api-access-nqs5d\") pod \"ovn-controller-2lk9k-config-ndtqx\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:29 crc kubenswrapper[4756]: I1124 12:43:29.673807 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:30 crc kubenswrapper[4756]: I1124 12:43:30.226098 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2lk9k-config-ndtqx"] Nov 24 12:43:30 crc kubenswrapper[4756]: W1124 12:43:30.244073 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod638f6ae9_d661_46d3_b5b0_fba360ece658.slice/crio-fbd69d030655087630e7949377a8a50d768c329ce235f3279ed6e23d9248c94a WatchSource:0}: Error finding container fbd69d030655087630e7949377a8a50d768c329ce235f3279ed6e23d9248c94a: Status 404 returned error can't find the container with id fbd69d030655087630e7949377a8a50d768c329ce235f3279ed6e23d9248c94a Nov 24 12:43:30 crc kubenswrapper[4756]: I1124 12:43:30.320909 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lk9k-config-ndtqx" event={"ID":"638f6ae9-d661-46d3-b5b0-fba360ece658","Type":"ContainerStarted","Data":"fbd69d030655087630e7949377a8a50d768c329ce235f3279ed6e23d9248c94a"} Nov 24 12:43:31 crc kubenswrapper[4756]: I1124 12:43:31.331728 4756 generic.go:334] "Generic (PLEG): container finished" podID="638f6ae9-d661-46d3-b5b0-fba360ece658" containerID="0928bec15dec4905b6f7eb3b0e83190daa0a2d54a6079c1ad6e8d4bcc66c176a" exitCode=0 Nov 24 12:43:31 crc kubenswrapper[4756]: I1124 12:43:31.331825 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lk9k-config-ndtqx" event={"ID":"638f6ae9-d661-46d3-b5b0-fba360ece658","Type":"ContainerDied","Data":"0928bec15dec4905b6f7eb3b0e83190daa0a2d54a6079c1ad6e8d4bcc66c176a"} Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.709428 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.849430 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/638f6ae9-d661-46d3-b5b0-fba360ece658-additional-scripts\") pod \"638f6ae9-d661-46d3-b5b0-fba360ece658\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.849744 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-run\") pod \"638f6ae9-d661-46d3-b5b0-fba360ece658\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.849930 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-run-ovn\") pod \"638f6ae9-d661-46d3-b5b0-fba360ece658\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.850029 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-log-ovn\") pod \"638f6ae9-d661-46d3-b5b0-fba360ece658\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.850175 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqs5d\" (UniqueName: \"kubernetes.io/projected/638f6ae9-d661-46d3-b5b0-fba360ece658-kube-api-access-nqs5d\") pod \"638f6ae9-d661-46d3-b5b0-fba360ece658\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.850311 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/638f6ae9-d661-46d3-b5b0-fba360ece658-scripts\") pod \"638f6ae9-d661-46d3-b5b0-fba360ece658\" (UID: \"638f6ae9-d661-46d3-b5b0-fba360ece658\") " Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.850341 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638f6ae9-d661-46d3-b5b0-fba360ece658-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "638f6ae9-d661-46d3-b5b0-fba360ece658" (UID: "638f6ae9-d661-46d3-b5b0-fba360ece658"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.850374 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "638f6ae9-d661-46d3-b5b0-fba360ece658" (UID: "638f6ae9-d661-46d3-b5b0-fba360ece658"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.850386 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-run" (OuterVolumeSpecName: "var-run") pod "638f6ae9-d661-46d3-b5b0-fba360ece658" (UID: "638f6ae9-d661-46d3-b5b0-fba360ece658"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.850919 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "638f6ae9-d661-46d3-b5b0-fba360ece658" (UID: "638f6ae9-d661-46d3-b5b0-fba360ece658"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.851648 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638f6ae9-d661-46d3-b5b0-fba360ece658-scripts" (OuterVolumeSpecName: "scripts") pod "638f6ae9-d661-46d3-b5b0-fba360ece658" (UID: "638f6ae9-d661-46d3-b5b0-fba360ece658"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.855326 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638f6ae9-d661-46d3-b5b0-fba360ece658-kube-api-access-nqs5d" (OuterVolumeSpecName: "kube-api-access-nqs5d") pod "638f6ae9-d661-46d3-b5b0-fba360ece658" (UID: "638f6ae9-d661-46d3-b5b0-fba360ece658"). InnerVolumeSpecName "kube-api-access-nqs5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.951839 4756 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/638f6ae9-d661-46d3-b5b0-fba360ece658-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.951886 4756 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.951895 4756 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.951903 4756 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/638f6ae9-d661-46d3-b5b0-fba360ece658-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 
24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.951912 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqs5d\" (UniqueName: \"kubernetes.io/projected/638f6ae9-d661-46d3-b5b0-fba360ece658-kube-api-access-nqs5d\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:32 crc kubenswrapper[4756]: I1124 12:43:32.951922 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/638f6ae9-d661-46d3-b5b0-fba360ece658-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.354036 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lk9k-config-ndtqx" event={"ID":"638f6ae9-d661-46d3-b5b0-fba360ece658","Type":"ContainerDied","Data":"fbd69d030655087630e7949377a8a50d768c329ce235f3279ed6e23d9248c94a"} Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.354338 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbd69d030655087630e7949377a8a50d768c329ce235f3279ed6e23d9248c94a" Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.354284 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2lk9k-config-ndtqx" Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.375501 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.378291 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.479245 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.479343 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.823722 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2lk9k-config-ndtqx"] Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.834960 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2lk9k-config-ndtqx"] Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.928925 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2lk9k-config-tp4dq"] Nov 24 12:43:33 crc kubenswrapper[4756]: E1124 12:43:33.929382 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638f6ae9-d661-46d3-b5b0-fba360ece658" containerName="ovn-config" Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.929405 4756 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="638f6ae9-d661-46d3-b5b0-fba360ece658" containerName="ovn-config" Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.929663 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="638f6ae9-d661-46d3-b5b0-fba360ece658" containerName="ovn-config" Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.930407 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.934674 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 12:43:33 crc kubenswrapper[4756]: I1124 12:43:33.942608 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2lk9k-config-tp4dq"] Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.068870 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-2lk9k" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.072354 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4a72418-0177-43c2-874e-22e9908be713-additional-scripts\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.072403 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5mg2\" (UniqueName: \"kubernetes.io/projected/e4a72418-0177-43c2-874e-22e9908be713-kube-api-access-j5mg2\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.072766 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-run\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.072863 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-run-ovn\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.072921 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-log-ovn\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.073013 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4a72418-0177-43c2-874e-22e9908be713-scripts\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.174223 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-log-ovn\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.174306 4756 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4a72418-0177-43c2-874e-22e9908be713-scripts\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.174341 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4a72418-0177-43c2-874e-22e9908be713-additional-scripts\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.174364 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5mg2\" (UniqueName: \"kubernetes.io/projected/e4a72418-0177-43c2-874e-22e9908be713-kube-api-access-j5mg2\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.174895 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-log-ovn\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.175097 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-run\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.175209 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-run\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.175281 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-run-ovn\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.175295 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4a72418-0177-43c2-874e-22e9908be713-additional-scripts\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.175382 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-run-ovn\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.176608 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4a72418-0177-43c2-874e-22e9908be713-scripts\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.194795 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5mg2\" (UniqueName: 
\"kubernetes.io/projected/e4a72418-0177-43c2-874e-22e9908be713-kube-api-access-j5mg2\") pod \"ovn-controller-2lk9k-config-tp4dq\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.248543 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.376135 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.496476 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638f6ae9-d661-46d3-b5b0-fba360ece658" path="/var/lib/kubelet/pods/638f6ae9-d661-46d3-b5b0-fba360ece658/volumes" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.706917 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2lk9k-config-tp4dq"] Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.988855 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:34 crc kubenswrapper[4756]: I1124 12:43:34.997239 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9cf650c1-2692-4b3d-89c5-5e3e0178e213-etc-swift\") pod \"swift-storage-0\" (UID: \"9cf650c1-2692-4b3d-89c5-5e3e0178e213\") " pod="openstack/swift-storage-0" Nov 24 12:43:35 crc kubenswrapper[4756]: I1124 12:43:35.226334 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 24 12:43:35 crc kubenswrapper[4756]: I1124 12:43:35.382094 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lk9k-config-tp4dq" event={"ID":"e4a72418-0177-43c2-874e-22e9908be713","Type":"ContainerStarted","Data":"afea2df627dbe9d7eeb3d71bb690abd60205741dc0cf600a9a61bed53c678644"} Nov 24 12:43:35 crc kubenswrapper[4756]: I1124 12:43:35.607783 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 12:43:35 crc kubenswrapper[4756]: W1124 12:43:35.617061 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cf650c1_2692_4b3d_89c5_5e3e0178e213.slice/crio-67056e5157c4a22269f1dad404847f8f7e495e57039b3d4ef057b501388e7e1e WatchSource:0}: Error finding container 67056e5157c4a22269f1dad404847f8f7e495e57039b3d4ef057b501388e7e1e: Status 404 returned error can't find the container with id 67056e5157c4a22269f1dad404847f8f7e495e57039b3d4ef057b501388e7e1e Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 12:43:36.393098 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"67056e5157c4a22269f1dad404847f8f7e495e57039b3d4ef057b501388e7e1e"} Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 12:43:36.396128 4756 generic.go:334] "Generic (PLEG): container finished" podID="e4a72418-0177-43c2-874e-22e9908be713" containerID="56bed93aed9f1455e0b712cc97d3365469269eedfd0f4167064fd6731f02e557" exitCode=0 Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 12:43:36.396170 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lk9k-config-tp4dq" event={"ID":"e4a72418-0177-43c2-874e-22e9908be713","Type":"ContainerDied","Data":"56bed93aed9f1455e0b712cc97d3365469269eedfd0f4167064fd6731f02e557"} Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 
12:43:36.592753 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 12:43:36.889451 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-fm4f8"] Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 12:43:36.890919 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 12:43:36.894969 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-sdcnv" Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 12:43:36.895485 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 12:43:36.899995 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-fm4f8"] Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 12:43:36.907318 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 12:43:36.957388 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-6x89j"] Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 12:43:36.978579 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6x89j" Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 12:43:36.991421 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8016-account-create-4ltbv"] Nov 24 12:43:36 crc kubenswrapper[4756]: I1124 12:43:36.995385 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8016-account-create-4ltbv" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.009935 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6x89j"] Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.024727 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.032273 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2-operator-scripts\") pod \"barbican-db-create-6x89j\" (UID: \"b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2\") " pod="openstack/barbican-db-create-6x89j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.032403 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgsb5\" (UniqueName: \"kubernetes.io/projected/b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2-kube-api-access-pgsb5\") pod \"barbican-db-create-6x89j\" (UID: \"b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2\") " pod="openstack/barbican-db-create-6x89j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.032454 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-db-sync-config-data\") pod \"watcher-db-sync-fm4f8\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.032478 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-combined-ca-bundle\") pod \"watcher-db-sync-fm4f8\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " 
pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.032511 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-config-data\") pod \"watcher-db-sync-fm4f8\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.032557 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlfz5\" (UniqueName: \"kubernetes.io/projected/73042cf1-c8fa-417b-b688-cfed5a034a8b-kube-api-access-jlfz5\") pod \"watcher-db-sync-fm4f8\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.114428 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8016-account-create-4ltbv"] Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.133736 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2-operator-scripts\") pod \"barbican-db-create-6x89j\" (UID: \"b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2\") " pod="openstack/barbican-db-create-6x89j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.133823 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgsb5\" (UniqueName: \"kubernetes.io/projected/b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2-kube-api-access-pgsb5\") pod \"barbican-db-create-6x89j\" (UID: \"b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2\") " pod="openstack/barbican-db-create-6x89j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.133857 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-684hg\" (UniqueName: 
\"kubernetes.io/projected/43b1e6f3-dbff-4e44-9900-80796af14d00-kube-api-access-684hg\") pod \"barbican-8016-account-create-4ltbv\" (UID: \"43b1e6f3-dbff-4e44-9900-80796af14d00\") " pod="openstack/barbican-8016-account-create-4ltbv" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.133878 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b1e6f3-dbff-4e44-9900-80796af14d00-operator-scripts\") pod \"barbican-8016-account-create-4ltbv\" (UID: \"43b1e6f3-dbff-4e44-9900-80796af14d00\") " pod="openstack/barbican-8016-account-create-4ltbv" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.133896 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-db-sync-config-data\") pod \"watcher-db-sync-fm4f8\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.133919 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-combined-ca-bundle\") pod \"watcher-db-sync-fm4f8\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.133945 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-config-data\") pod \"watcher-db-sync-fm4f8\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.133971 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlfz5\" (UniqueName: 
\"kubernetes.io/projected/73042cf1-c8fa-417b-b688-cfed5a034a8b-kube-api-access-jlfz5\") pod \"watcher-db-sync-fm4f8\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.134848 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2-operator-scripts\") pod \"barbican-db-create-6x89j\" (UID: \"b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2\") " pod="openstack/barbican-db-create-6x89j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.141089 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-g4t66"] Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.142208 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g4t66" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.149773 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-db-sync-config-data\") pod \"watcher-db-sync-fm4f8\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.152964 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-config-data\") pod \"watcher-db-sync-fm4f8\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.164855 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-g4t66"] Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.189786 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-combined-ca-bundle\") pod \"watcher-db-sync-fm4f8\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.206949 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgsb5\" (UniqueName: \"kubernetes.io/projected/b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2-kube-api-access-pgsb5\") pod \"barbican-db-create-6x89j\" (UID: \"b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2\") " pod="openstack/barbican-db-create-6x89j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.213127 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlfz5\" (UniqueName: \"kubernetes.io/projected/73042cf1-c8fa-417b-b688-cfed5a034a8b-kube-api-access-jlfz5\") pod \"watcher-db-sync-fm4f8\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.214549 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.237735 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b34a20-5375-4e57-9934-848828dee2bf-operator-scripts\") pod \"cinder-db-create-g4t66\" (UID: \"32b34a20-5375-4e57-9934-848828dee2bf\") " pod="openstack/cinder-db-create-g4t66" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.237878 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-684hg\" (UniqueName: \"kubernetes.io/projected/43b1e6f3-dbff-4e44-9900-80796af14d00-kube-api-access-684hg\") pod \"barbican-8016-account-create-4ltbv\" (UID: \"43b1e6f3-dbff-4e44-9900-80796af14d00\") " pod="openstack/barbican-8016-account-create-4ltbv" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.237907 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b1e6f3-dbff-4e44-9900-80796af14d00-operator-scripts\") pod \"barbican-8016-account-create-4ltbv\" (UID: \"43b1e6f3-dbff-4e44-9900-80796af14d00\") " pod="openstack/barbican-8016-account-create-4ltbv" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.237953 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgfc2\" (UniqueName: \"kubernetes.io/projected/32b34a20-5375-4e57-9934-848828dee2bf-kube-api-access-jgfc2\") pod \"cinder-db-create-g4t66\" (UID: \"32b34a20-5375-4e57-9934-848828dee2bf\") " pod="openstack/cinder-db-create-g4t66" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.239275 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b1e6f3-dbff-4e44-9900-80796af14d00-operator-scripts\") pod \"barbican-8016-account-create-4ltbv\" (UID: 
\"43b1e6f3-dbff-4e44-9900-80796af14d00\") " pod="openstack/barbican-8016-account-create-4ltbv" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.282411 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-66cf-account-create-gl79r"] Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.283626 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-66cf-account-create-gl79r" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.287425 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.287880 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-684hg\" (UniqueName: \"kubernetes.io/projected/43b1e6f3-dbff-4e44-9900-80796af14d00-kube-api-access-684hg\") pod \"barbican-8016-account-create-4ltbv\" (UID: \"43b1e6f3-dbff-4e44-9900-80796af14d00\") " pod="openstack/barbican-8016-account-create-4ltbv" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.301326 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-66cf-account-create-gl79r"] Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.318516 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6x89j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.322943 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8016-account-create-4ltbv" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.341139 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b34a20-5375-4e57-9934-848828dee2bf-operator-scripts\") pod \"cinder-db-create-g4t66\" (UID: \"32b34a20-5375-4e57-9934-848828dee2bf\") " pod="openstack/cinder-db-create-g4t66" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.341214 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qs28\" (UniqueName: \"kubernetes.io/projected/7fb7bce2-6bda-4c2b-b661-e4f52c9e2046-kube-api-access-9qs28\") pod \"cinder-66cf-account-create-gl79r\" (UID: \"7fb7bce2-6bda-4c2b-b661-e4f52c9e2046\") " pod="openstack/cinder-66cf-account-create-gl79r" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.341249 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fb7bce2-6bda-4c2b-b661-e4f52c9e2046-operator-scripts\") pod \"cinder-66cf-account-create-gl79r\" (UID: \"7fb7bce2-6bda-4c2b-b661-e4f52c9e2046\") " pod="openstack/cinder-66cf-account-create-gl79r" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.341311 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgfc2\" (UniqueName: \"kubernetes.io/projected/32b34a20-5375-4e57-9934-848828dee2bf-kube-api-access-jgfc2\") pod \"cinder-db-create-g4t66\" (UID: \"32b34a20-5375-4e57-9934-848828dee2bf\") " pod="openstack/cinder-db-create-g4t66" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.342811 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b34a20-5375-4e57-9934-848828dee2bf-operator-scripts\") pod \"cinder-db-create-g4t66\" (UID: 
\"32b34a20-5375-4e57-9934-848828dee2bf\") " pod="openstack/cinder-db-create-g4t66" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.405731 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgfc2\" (UniqueName: \"kubernetes.io/projected/32b34a20-5375-4e57-9934-848828dee2bf-kube-api-access-jgfc2\") pod \"cinder-db-create-g4t66\" (UID: \"32b34a20-5375-4e57-9934-848828dee2bf\") " pod="openstack/cinder-db-create-g4t66" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.412927 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.413215 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="prometheus" containerID="cri-o://cb991172ccc18a386f0a25025a7e6959f41f8e598efadcf34da8f1091727d15b" gracePeriod=600 Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.413626 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="thanos-sidecar" containerID="cri-o://c5c576c9e27c38861cac54c8eb0edaee0bae680ad40155da103635bcb28aa14c" gracePeriod=600 Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.413667 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="config-reloader" containerID="cri-o://66ea414b6b36b7923af44699931c246763f83afe867a1b52f8bdfc171dbedc3c" gracePeriod=600 Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.442603 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qs28\" (UniqueName: \"kubernetes.io/projected/7fb7bce2-6bda-4c2b-b661-e4f52c9e2046-kube-api-access-9qs28\") pod 
\"cinder-66cf-account-create-gl79r\" (UID: \"7fb7bce2-6bda-4c2b-b661-e4f52c9e2046\") " pod="openstack/cinder-66cf-account-create-gl79r" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.442669 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fb7bce2-6bda-4c2b-b661-e4f52c9e2046-operator-scripts\") pod \"cinder-66cf-account-create-gl79r\" (UID: \"7fb7bce2-6bda-4c2b-b661-e4f52c9e2046\") " pod="openstack/cinder-66cf-account-create-gl79r" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.443410 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fb7bce2-6bda-4c2b-b661-e4f52c9e2046-operator-scripts\") pod \"cinder-66cf-account-create-gl79r\" (UID: \"7fb7bce2-6bda-4c2b-b661-e4f52c9e2046\") " pod="openstack/cinder-66cf-account-create-gl79r" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.489472 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qs28\" (UniqueName: \"kubernetes.io/projected/7fb7bce2-6bda-4c2b-b661-e4f52c9e2046-kube-api-access-9qs28\") pod \"cinder-66cf-account-create-gl79r\" (UID: \"7fb7bce2-6bda-4c2b-b661-e4f52c9e2046\") " pod="openstack/cinder-66cf-account-create-gl79r" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.490486 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-6ppv6"] Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.491622 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-6ppv6" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.534208 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6ppv6"] Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.545342 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2538e4c7-1c70-4919-ba63-e24b6e1fbca0-operator-scripts\") pod \"neutron-db-create-6ppv6\" (UID: \"2538e4c7-1c70-4919-ba63-e24b6e1fbca0\") " pod="openstack/neutron-db-create-6ppv6" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.545418 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxm48\" (UniqueName: \"kubernetes.io/projected/2538e4c7-1c70-4919-ba63-e24b6e1fbca0-kube-api-access-rxm48\") pod \"neutron-db-create-6ppv6\" (UID: \"2538e4c7-1c70-4919-ba63-e24b6e1fbca0\") " pod="openstack/neutron-db-create-6ppv6" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.634677 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-g4t66" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.647196 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2538e4c7-1c70-4919-ba63-e24b6e1fbca0-operator-scripts\") pod \"neutron-db-create-6ppv6\" (UID: \"2538e4c7-1c70-4919-ba63-e24b6e1fbca0\") " pod="openstack/neutron-db-create-6ppv6" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.647268 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxm48\" (UniqueName: \"kubernetes.io/projected/2538e4c7-1c70-4919-ba63-e24b6e1fbca0-kube-api-access-rxm48\") pod \"neutron-db-create-6ppv6\" (UID: \"2538e4c7-1c70-4919-ba63-e24b6e1fbca0\") " pod="openstack/neutron-db-create-6ppv6" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.648291 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2538e4c7-1c70-4919-ba63-e24b6e1fbca0-operator-scripts\") pod \"neutron-db-create-6ppv6\" (UID: \"2538e4c7-1c70-4919-ba63-e24b6e1fbca0\") " pod="openstack/neutron-db-create-6ppv6" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.661751 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-x592j"] Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.662894 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x592j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.667641 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.667891 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.668006 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.668568 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-66cf-account-create-gl79r" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.675017 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2m2wq" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.684918 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxm48\" (UniqueName: \"kubernetes.io/projected/2538e4c7-1c70-4919-ba63-e24b6e1fbca0-kube-api-access-rxm48\") pod \"neutron-db-create-6ppv6\" (UID: \"2538e4c7-1c70-4919-ba63-e24b6e1fbca0\") " pod="openstack/neutron-db-create-6ppv6" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.699838 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-22a1-account-create-w8fss"] Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.710022 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-22a1-account-create-w8fss" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.712444 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.723031 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x592j"] Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.729829 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-22a1-account-create-w8fss"] Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.830613 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6ppv6" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.859337 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vbc7\" (UniqueName: \"kubernetes.io/projected/519ea567-18c2-49a6-8e45-ad4bb39ecd90-kube-api-access-4vbc7\") pod \"keystone-db-sync-x592j\" (UID: \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\") " pod="openstack/keystone-db-sync-x592j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.859464 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519ea567-18c2-49a6-8e45-ad4bb39ecd90-combined-ca-bundle\") pod \"keystone-db-sync-x592j\" (UID: \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\") " pod="openstack/keystone-db-sync-x592j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.859493 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cba67f4d-09b1-4ef4-b159-c2dad51b1050-operator-scripts\") pod \"neutron-22a1-account-create-w8fss\" (UID: \"cba67f4d-09b1-4ef4-b159-c2dad51b1050\") " pod="openstack/neutron-22a1-account-create-w8fss" Nov 24 12:43:37 crc 
kubenswrapper[4756]: I1124 12:43:37.859537 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfcgm\" (UniqueName: \"kubernetes.io/projected/cba67f4d-09b1-4ef4-b159-c2dad51b1050-kube-api-access-dfcgm\") pod \"neutron-22a1-account-create-w8fss\" (UID: \"cba67f4d-09b1-4ef4-b159-c2dad51b1050\") " pod="openstack/neutron-22a1-account-create-w8fss" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.859560 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/519ea567-18c2-49a6-8e45-ad4bb39ecd90-config-data\") pod \"keystone-db-sync-x592j\" (UID: \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\") " pod="openstack/keystone-db-sync-x592j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.961529 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vbc7\" (UniqueName: \"kubernetes.io/projected/519ea567-18c2-49a6-8e45-ad4bb39ecd90-kube-api-access-4vbc7\") pod \"keystone-db-sync-x592j\" (UID: \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\") " pod="openstack/keystone-db-sync-x592j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.961637 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519ea567-18c2-49a6-8e45-ad4bb39ecd90-combined-ca-bundle\") pod \"keystone-db-sync-x592j\" (UID: \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\") " pod="openstack/keystone-db-sync-x592j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.962401 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cba67f4d-09b1-4ef4-b159-c2dad51b1050-operator-scripts\") pod \"neutron-22a1-account-create-w8fss\" (UID: \"cba67f4d-09b1-4ef4-b159-c2dad51b1050\") " pod="openstack/neutron-22a1-account-create-w8fss" Nov 24 12:43:37 crc 
kubenswrapper[4756]: I1124 12:43:37.962587 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cba67f4d-09b1-4ef4-b159-c2dad51b1050-operator-scripts\") pod \"neutron-22a1-account-create-w8fss\" (UID: \"cba67f4d-09b1-4ef4-b159-c2dad51b1050\") " pod="openstack/neutron-22a1-account-create-w8fss" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.962734 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfcgm\" (UniqueName: \"kubernetes.io/projected/cba67f4d-09b1-4ef4-b159-c2dad51b1050-kube-api-access-dfcgm\") pod \"neutron-22a1-account-create-w8fss\" (UID: \"cba67f4d-09b1-4ef4-b159-c2dad51b1050\") " pod="openstack/neutron-22a1-account-create-w8fss" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.962845 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/519ea567-18c2-49a6-8e45-ad4bb39ecd90-config-data\") pod \"keystone-db-sync-x592j\" (UID: \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\") " pod="openstack/keystone-db-sync-x592j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.965574 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519ea567-18c2-49a6-8e45-ad4bb39ecd90-combined-ca-bundle\") pod \"keystone-db-sync-x592j\" (UID: \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\") " pod="openstack/keystone-db-sync-x592j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.966393 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/519ea567-18c2-49a6-8e45-ad4bb39ecd90-config-data\") pod \"keystone-db-sync-x592j\" (UID: \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\") " pod="openstack/keystone-db-sync-x592j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.981970 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dfcgm\" (UniqueName: \"kubernetes.io/projected/cba67f4d-09b1-4ef4-b159-c2dad51b1050-kube-api-access-dfcgm\") pod \"neutron-22a1-account-create-w8fss\" (UID: \"cba67f4d-09b1-4ef4-b159-c2dad51b1050\") " pod="openstack/neutron-22a1-account-create-w8fss" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.982774 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vbc7\" (UniqueName: \"kubernetes.io/projected/519ea567-18c2-49a6-8e45-ad4bb39ecd90-kube-api-access-4vbc7\") pod \"keystone-db-sync-x592j\" (UID: \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\") " pod="openstack/keystone-db-sync-x592j" Nov 24 12:43:37 crc kubenswrapper[4756]: I1124 12:43:37.986068 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x592j" Nov 24 12:43:38 crc kubenswrapper[4756]: I1124 12:43:38.035944 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-22a1-account-create-w8fss" Nov 24 12:43:38 crc kubenswrapper[4756]: I1124 12:43:38.376580 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Nov 24 12:43:38 crc kubenswrapper[4756]: I1124 12:43:38.441284 4756 generic.go:334] "Generic (PLEG): container finished" podID="a974f608-51c8-4650-be4a-fad42e19bd48" containerID="c5c576c9e27c38861cac54c8eb0edaee0bae680ad40155da103635bcb28aa14c" exitCode=0 Nov 24 12:43:38 crc kubenswrapper[4756]: I1124 12:43:38.441568 4756 generic.go:334] "Generic (PLEG): container finished" podID="a974f608-51c8-4650-be4a-fad42e19bd48" containerID="66ea414b6b36b7923af44699931c246763f83afe867a1b52f8bdfc171dbedc3c" exitCode=0 Nov 24 12:43:38 crc kubenswrapper[4756]: I1124 12:43:38.441578 
4756 generic.go:334] "Generic (PLEG): container finished" podID="a974f608-51c8-4650-be4a-fad42e19bd48" containerID="cb991172ccc18a386f0a25025a7e6959f41f8e598efadcf34da8f1091727d15b" exitCode=0 Nov 24 12:43:38 crc kubenswrapper[4756]: I1124 12:43:38.441600 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a974f608-51c8-4650-be4a-fad42e19bd48","Type":"ContainerDied","Data":"c5c576c9e27c38861cac54c8eb0edaee0bae680ad40155da103635bcb28aa14c"} Nov 24 12:43:38 crc kubenswrapper[4756]: I1124 12:43:38.441625 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a974f608-51c8-4650-be4a-fad42e19bd48","Type":"ContainerDied","Data":"66ea414b6b36b7923af44699931c246763f83afe867a1b52f8bdfc171dbedc3c"} Nov 24 12:43:38 crc kubenswrapper[4756]: I1124 12:43:38.441634 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a974f608-51c8-4650-be4a-fad42e19bd48","Type":"ContainerDied","Data":"cb991172ccc18a386f0a25025a7e6959f41f8e598efadcf34da8f1091727d15b"} Nov 24 12:43:38 crc kubenswrapper[4756]: E1124 12:43:38.791715 4756 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.200:41066->38.102.83.200:44291: write tcp 38.102.83.200:41066->38.102.83.200:44291: write: broken pipe Nov 24 12:43:43 crc kubenswrapper[4756]: I1124 12:43:43.376241 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.503136 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lk9k-config-tp4dq" 
event={"ID":"e4a72418-0177-43c2-874e-22e9908be713","Type":"ContainerDied","Data":"afea2df627dbe9d7eeb3d71bb690abd60205741dc0cf600a9a61bed53c678644"} Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.503236 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afea2df627dbe9d7eeb3d71bb690abd60205741dc0cf600a9a61bed53c678644" Nov 24 12:43:45 crc kubenswrapper[4756]: E1124 12:43:45.519319 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 24 12:43:45 crc kubenswrapper[4756]: E1124 12:43:45.519874 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4jmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/ser
viceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-tflqn_openstack(79cae2a4-d229-4c70-b19f-b9016c530697): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:43:45 crc kubenswrapper[4756]: E1124 12:43:45.521226 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-tflqn" podUID="79cae2a4-d229-4c70-b19f-b9016c530697" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.587056 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.701752 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-log-ovn\") pod \"e4a72418-0177-43c2-874e-22e9908be713\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.701918 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5mg2\" (UniqueName: \"kubernetes.io/projected/e4a72418-0177-43c2-874e-22e9908be713-kube-api-access-j5mg2\") pod \"e4a72418-0177-43c2-874e-22e9908be713\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.701963 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-run\") pod \"e4a72418-0177-43c2-874e-22e9908be713\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.702009 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4a72418-0177-43c2-874e-22e9908be713-scripts\") pod \"e4a72418-0177-43c2-874e-22e9908be713\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.702084 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4a72418-0177-43c2-874e-22e9908be713-additional-scripts\") pod \"e4a72418-0177-43c2-874e-22e9908be713\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.702107 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-run-ovn\") pod \"e4a72418-0177-43c2-874e-22e9908be713\" (UID: \"e4a72418-0177-43c2-874e-22e9908be713\") " Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.702203 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-run" (OuterVolumeSpecName: "var-run") pod "e4a72418-0177-43c2-874e-22e9908be713" (UID: "e4a72418-0177-43c2-874e-22e9908be713"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.702259 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e4a72418-0177-43c2-874e-22e9908be713" (UID: "e4a72418-0177-43c2-874e-22e9908be713"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.702844 4756 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.702867 4756 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.703500 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e4a72418-0177-43c2-874e-22e9908be713" (UID: "e4a72418-0177-43c2-874e-22e9908be713"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.703694 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a72418-0177-43c2-874e-22e9908be713-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e4a72418-0177-43c2-874e-22e9908be713" (UID: "e4a72418-0177-43c2-874e-22e9908be713"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.703838 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a72418-0177-43c2-874e-22e9908be713-scripts" (OuterVolumeSpecName: "scripts") pod "e4a72418-0177-43c2-874e-22e9908be713" (UID: "e4a72418-0177-43c2-874e-22e9908be713"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.710797 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a72418-0177-43c2-874e-22e9908be713-kube-api-access-j5mg2" (OuterVolumeSpecName: "kube-api-access-j5mg2") pod "e4a72418-0177-43c2-874e-22e9908be713" (UID: "e4a72418-0177-43c2-874e-22e9908be713"). InnerVolumeSpecName "kube-api-access-j5mg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.804902 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5mg2\" (UniqueName: \"kubernetes.io/projected/e4a72418-0177-43c2-874e-22e9908be713-kube-api-access-j5mg2\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.805333 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4a72418-0177-43c2-874e-22e9908be713-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.805349 4756 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e4a72418-0177-43c2-874e-22e9908be713-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.805362 4756 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e4a72418-0177-43c2-874e-22e9908be713-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:45 crc kubenswrapper[4756]: I1124 12:43:45.807667 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.008702 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-thanos-prometheus-http-client-file\") pod \"a974f608-51c8-4650-be4a-fad42e19bd48\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.009603 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67n62\" (UniqueName: \"kubernetes.io/projected/a974f608-51c8-4650-be4a-fad42e19bd48-kube-api-access-67n62\") pod \"a974f608-51c8-4650-be4a-fad42e19bd48\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.009637 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a974f608-51c8-4650-be4a-fad42e19bd48-tls-assets\") pod \"a974f608-51c8-4650-be4a-fad42e19bd48\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.009971 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a974f608-51c8-4650-be4a-fad42e19bd48-config-out\") pod \"a974f608-51c8-4650-be4a-fad42e19bd48\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.009993 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-config\") pod \"a974f608-51c8-4650-be4a-fad42e19bd48\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.010110 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"a974f608-51c8-4650-be4a-fad42e19bd48\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.010221 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a974f608-51c8-4650-be4a-fad42e19bd48-prometheus-metric-storage-rulefiles-0\") pod \"a974f608-51c8-4650-be4a-fad42e19bd48\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.010267 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-web-config\") pod \"a974f608-51c8-4650-be4a-fad42e19bd48\" (UID: \"a974f608-51c8-4650-be4a-fad42e19bd48\") " Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.014841 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a974f608-51c8-4650-be4a-fad42e19bd48-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "a974f608-51c8-4650-be4a-fad42e19bd48" (UID: "a974f608-51c8-4650-be4a-fad42e19bd48"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.016243 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "a974f608-51c8-4650-be4a-fad42e19bd48" (UID: "a974f608-51c8-4650-be4a-fad42e19bd48"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.027803 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-config" (OuterVolumeSpecName: "config") pod "a974f608-51c8-4650-be4a-fad42e19bd48" (UID: "a974f608-51c8-4650-be4a-fad42e19bd48"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.027860 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a974f608-51c8-4650-be4a-fad42e19bd48-kube-api-access-67n62" (OuterVolumeSpecName: "kube-api-access-67n62") pod "a974f608-51c8-4650-be4a-fad42e19bd48" (UID: "a974f608-51c8-4650-be4a-fad42e19bd48"). InnerVolumeSpecName "kube-api-access-67n62". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.040780 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a974f608-51c8-4650-be4a-fad42e19bd48-config-out" (OuterVolumeSpecName: "config-out") pod "a974f608-51c8-4650-be4a-fad42e19bd48" (UID: "a974f608-51c8-4650-be4a-fad42e19bd48"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.048589 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a974f608-51c8-4650-be4a-fad42e19bd48-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "a974f608-51c8-4650-be4a-fad42e19bd48" (UID: "a974f608-51c8-4650-be4a-fad42e19bd48"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.069205 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "a974f608-51c8-4650-be4a-fad42e19bd48" (UID: "a974f608-51c8-4650-be4a-fad42e19bd48"). InnerVolumeSpecName "pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.126696 4756 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a974f608-51c8-4650-be4a-fad42e19bd48-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.126747 4756 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.126763 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67n62\" (UniqueName: \"kubernetes.io/projected/a974f608-51c8-4650-be4a-fad42e19bd48-kube-api-access-67n62\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.126780 4756 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a974f608-51c8-4650-be4a-fad42e19bd48-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.126797 4756 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a974f608-51c8-4650-be4a-fad42e19bd48-config-out\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:46 crc 
kubenswrapper[4756]: I1124 12:43:46.126811 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.126849 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") on node \"crc\" " Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.182031 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-web-config" (OuterVolumeSpecName: "web-config") pod "a974f608-51c8-4650-be4a-fad42e19bd48" (UID: "a974f608-51c8-4650-be4a-fad42e19bd48"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.196702 4756 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.196853 4756 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f") on node "crc" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.228758 4756 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a974f608-51c8-4650-be4a-fad42e19bd48-web-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.228863 4756 reconciler_common.go:293] "Volume detached for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.513121 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2lk9k-config-tp4dq" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.518354 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a974f608-51c8-4650-be4a-fad42e19bd48","Type":"ContainerDied","Data":"04dcc32c34cc748f808423958747c63e5b39cbce28794a16b2c3bbee82722082"} Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.518441 4756 scope.go:117] "RemoveContainer" containerID="c5c576c9e27c38861cac54c8eb0edaee0bae680ad40155da103635bcb28aa14c" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.519059 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: E1124 12:43:46.520364 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-tflqn" podUID="79cae2a4-d229-4c70-b19f-b9016c530697" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.602877 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.620365 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.634401 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x592j"] Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.650508 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6x89j"] Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.669221 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-22a1-account-create-w8fss"] Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.675628 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-g4t66"] Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.683436 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 12:43:46 crc kubenswrapper[4756]: E1124 12:43:46.683933 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a72418-0177-43c2-874e-22e9908be713" containerName="ovn-config" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.683959 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a72418-0177-43c2-874e-22e9908be713" containerName="ovn-config" Nov 24 12:43:46 crc 
kubenswrapper[4756]: E1124 12:43:46.683981 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="prometheus" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.683990 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="prometheus" Nov 24 12:43:46 crc kubenswrapper[4756]: E1124 12:43:46.684003 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="thanos-sidecar" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.684011 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="thanos-sidecar" Nov 24 12:43:46 crc kubenswrapper[4756]: E1124 12:43:46.684026 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="config-reloader" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.684035 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="config-reloader" Nov 24 12:43:46 crc kubenswrapper[4756]: E1124 12:43:46.684045 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="init-config-reloader" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.684054 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="init-config-reloader" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.684250 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a72418-0177-43c2-874e-22e9908be713" containerName="ovn-config" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.684268 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="prometheus" Nov 24 12:43:46 crc 
kubenswrapper[4756]: I1124 12:43:46.684276 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="config-reloader" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.684287 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" containerName="thanos-sidecar" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.686208 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.691200 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.691268 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.691419 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.692737 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.692925 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.693082 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-d4qzx" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.710942 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-66cf-account-create-gl79r"] Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.721712 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.722631 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 24 12:43:46 crc kubenswrapper[4756]: W1124 12:43:46.729351 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod519ea567_18c2_49a6_8e45_ad4bb39ecd90.slice/crio-c0b3a32a0069bbe27ab927989d139ceb81173a3e32762b2186b8139702f4c9fe WatchSource:0}: Error finding container c0b3a32a0069bbe27ab927989d139ceb81173a3e32762b2186b8139702f4c9fe: Status 404 returned error can't find the container with id c0b3a32a0069bbe27ab927989d139ceb81173a3e32762b2186b8139702f4c9fe Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.738348 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6ppv6"] Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.756437 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/78c50331-2c8a-4ebf-8dfc-66456b7167c0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.756475 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.756498 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/78c50331-2c8a-4ebf-8dfc-66456b7167c0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.756528 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.756573 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.756597 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-config\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.756632 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/78c50331-2c8a-4ebf-8dfc-66456b7167c0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc 
kubenswrapper[4756]: I1124 12:43:46.756671 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.756690 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l96q\" (UniqueName: \"kubernetes.io/projected/78c50331-2c8a-4ebf-8dfc-66456b7167c0-kube-api-access-7l96q\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.756722 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.756745 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.758810 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-fm4f8"] Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.771312 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-8016-account-create-4ltbv"] Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.813385 4756 scope.go:117] "RemoveContainer" containerID="66ea414b6b36b7923af44699931c246763f83afe867a1b52f8bdfc171dbedc3c" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.829238 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2lk9k-config-tp4dq"] Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.836642 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2lk9k-config-tp4dq"] Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.865391 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/78c50331-2c8a-4ebf-8dfc-66456b7167c0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.865437 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.865459 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/78c50331-2c8a-4ebf-8dfc-66456b7167c0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.865491 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.865531 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.865548 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-config\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.865585 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/78c50331-2c8a-4ebf-8dfc-66456b7167c0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.865629 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 
12:43:46.865644 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l96q\" (UniqueName: \"kubernetes.io/projected/78c50331-2c8a-4ebf-8dfc-66456b7167c0-kube-api-access-7l96q\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.865671 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.865692 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.871317 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/78c50331-2c8a-4ebf-8dfc-66456b7167c0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.884565 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/78c50331-2c8a-4ebf-8dfc-66456b7167c0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.890003 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.890873 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-config\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.895951 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.896469 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.897351 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/78c50331-2c8a-4ebf-8dfc-66456b7167c0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " 
pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.898044 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.898841 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.918247 4756 scope.go:117] "RemoveContainer" containerID="cb991172ccc18a386f0a25025a7e6959f41f8e598efadcf34da8f1091727d15b" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.924684 4756 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.924730 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ac3567aefb4ff022402a71c4c19bba7ed7a13b4fde27606ef830df8410391bf/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.936294 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l96q\" (UniqueName: \"kubernetes.io/projected/78c50331-2c8a-4ebf-8dfc-66456b7167c0-kube-api-access-7l96q\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.992687 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"prometheus-metric-storage-0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:46 crc kubenswrapper[4756]: I1124 12:43:46.997533 4756 scope.go:117] "RemoveContainer" containerID="8b0f3852d7aa52dc4c3486ff8f736b20e5083ed69d982e5959ac4f8b04a47b7b" Nov 24 12:43:47 crc kubenswrapper[4756]: I1124 12:43:47.019026 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 12:43:47 crc kubenswrapper[4756]: I1124 12:43:47.737933 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-fm4f8" event={"ID":"73042cf1-c8fa-417b-b688-cfed5a034a8b","Type":"ContainerStarted","Data":"96d7a898db235d4dfa5427b5c60495e2d6ca5c10300438a2c8b56e436d8c36ca"} Nov 24 12:43:47 crc kubenswrapper[4756]: I1124 12:43:47.743561 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-66cf-account-create-gl79r" event={"ID":"7fb7bce2-6bda-4c2b-b661-e4f52c9e2046","Type":"ContainerStarted","Data":"219f3950282b4131e2bd9701805dfbab883dacc1f0600d00f12ff84fc8575718"} Nov 24 12:43:47 crc kubenswrapper[4756]: I1124 12:43:47.748648 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g4t66" event={"ID":"32b34a20-5375-4e57-9934-848828dee2bf","Type":"ContainerStarted","Data":"057cb1b7a431d27d4bd4a844909cbd02ffa9869eac5c9b18f35d1ac3abeeb375"} Nov 24 12:43:47 crc kubenswrapper[4756]: I1124 12:43:47.750468 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8016-account-create-4ltbv" event={"ID":"43b1e6f3-dbff-4e44-9900-80796af14d00","Type":"ContainerStarted","Data":"df57dbd1eb7e52352d19e11f583b31bb4b2b54dca2046be5df70d8a106b27297"} Nov 24 12:43:47 crc kubenswrapper[4756]: I1124 12:43:47.751666 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-22a1-account-create-w8fss" event={"ID":"cba67f4d-09b1-4ef4-b159-c2dad51b1050","Type":"ContainerStarted","Data":"7509bbce8c44192750872131c318090f9ab355cdf0c3945ff3c485fb4dd58823"} Nov 24 12:43:47 crc kubenswrapper[4756]: I1124 12:43:47.757916 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6x89j" event={"ID":"b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2","Type":"ContainerStarted","Data":"dba95f1ccbf550933e07a92825af0a40661c7cfa56476a5a9dcd1328b8499fe7"} Nov 24 12:43:47 crc kubenswrapper[4756]: 
I1124 12:43:47.758904 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x592j" event={"ID":"519ea567-18c2-49a6-8e45-ad4bb39ecd90","Type":"ContainerStarted","Data":"c0b3a32a0069bbe27ab927989d139ceb81173a3e32762b2186b8139702f4c9fe"} Nov 24 12:43:47 crc kubenswrapper[4756]: I1124 12:43:47.761326 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6ppv6" event={"ID":"2538e4c7-1c70-4919-ba63-e24b6e1fbca0","Type":"ContainerStarted","Data":"02c60a33abb1c4bdbb00f9d0f41af31f7e3876ebd5fd7bcd9a7d3d688de0ffba"} Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.087069 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.506466 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a974f608-51c8-4650-be4a-fad42e19bd48" path="/var/lib/kubelet/pods/a974f608-51c8-4650-be4a-fad42e19bd48/volumes" Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.508262 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a72418-0177-43c2-874e-22e9908be713" path="/var/lib/kubelet/pods/e4a72418-0177-43c2-874e-22e9908be713/volumes" Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.773874 4756 generic.go:334] "Generic (PLEG): container finished" podID="2538e4c7-1c70-4919-ba63-e24b6e1fbca0" containerID="bb0583a4437edd20709ebf344d2a09284117dbcad506d9e2c9aac78c53243a2f" exitCode=0 Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.773927 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6ppv6" event={"ID":"2538e4c7-1c70-4919-ba63-e24b6e1fbca0","Type":"ContainerDied","Data":"bb0583a4437edd20709ebf344d2a09284117dbcad506d9e2c9aac78c53243a2f"} Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.778505 4756 generic.go:334] "Generic (PLEG): container finished" podID="cba67f4d-09b1-4ef4-b159-c2dad51b1050" 
containerID="72b2a90874c2bc391f3b2f2a17c67902e654ae2248389e401dad34080fd3da4b" exitCode=0 Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.778593 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-22a1-account-create-w8fss" event={"ID":"cba67f4d-09b1-4ef4-b159-c2dad51b1050","Type":"ContainerDied","Data":"72b2a90874c2bc391f3b2f2a17c67902e654ae2248389e401dad34080fd3da4b"} Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.781097 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78c50331-2c8a-4ebf-8dfc-66456b7167c0","Type":"ContainerStarted","Data":"f0592957419004be7389cfdc06a9976c8106f1bf021c023b3c274abdb3cf96e2"} Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.795501 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"a87fcbe64b9036660ea1a93b5a85725c82dfed91e75bccc0eb5081b8711a64d8"} Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.795574 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"6da775b12eff303c9a6764ca9d317458d8b4fb225f43b62f2c44f663b14120d1"} Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.804700 4756 generic.go:334] "Generic (PLEG): container finished" podID="b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2" containerID="9e54559d6a49c7647e7ac69894872620eb183ff6fef4f7b6c30c56dbec1d8446" exitCode=0 Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.804810 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6x89j" event={"ID":"b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2","Type":"ContainerDied","Data":"9e54559d6a49c7647e7ac69894872620eb183ff6fef4f7b6c30c56dbec1d8446"} Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.820785 4756 generic.go:334] "Generic (PLEG): container 
finished" podID="7fb7bce2-6bda-4c2b-b661-e4f52c9e2046" containerID="7dfee5e677531be7f1546bf1d78731a8920e8d16e0013bbd784cb115e33ccc9e" exitCode=0 Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.820908 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-66cf-account-create-gl79r" event={"ID":"7fb7bce2-6bda-4c2b-b661-e4f52c9e2046","Type":"ContainerDied","Data":"7dfee5e677531be7f1546bf1d78731a8920e8d16e0013bbd784cb115e33ccc9e"} Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.833837 4756 generic.go:334] "Generic (PLEG): container finished" podID="32b34a20-5375-4e57-9934-848828dee2bf" containerID="c3a5881c0d93497bf985fca293eeb5fe4d235f05cfec96c9b11c0396ab0ef0b7" exitCode=0 Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.833967 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g4t66" event={"ID":"32b34a20-5375-4e57-9934-848828dee2bf","Type":"ContainerDied","Data":"c3a5881c0d93497bf985fca293eeb5fe4d235f05cfec96c9b11c0396ab0ef0b7"} Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.840265 4756 generic.go:334] "Generic (PLEG): container finished" podID="43b1e6f3-dbff-4e44-9900-80796af14d00" containerID="8b316be604a858d8e18fa501fca5eed0ec24696060c128b40004bbbd255ee134" exitCode=0 Nov 24 12:43:48 crc kubenswrapper[4756]: I1124 12:43:48.840329 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8016-account-create-4ltbv" event={"ID":"43b1e6f3-dbff-4e44-9900-80796af14d00","Type":"ContainerDied","Data":"8b316be604a858d8e18fa501fca5eed0ec24696060c128b40004bbbd255ee134"} Nov 24 12:43:49 crc kubenswrapper[4756]: I1124 12:43:49.864399 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"1f340517c73d053f17a1ef7cc074da838341c2223aa64761bf478db0b26c86c5"} Nov 24 12:43:51 crc kubenswrapper[4756]: I1124 12:43:51.887896 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78c50331-2c8a-4ebf-8dfc-66456b7167c0","Type":"ContainerStarted","Data":"81ec4e8709b8ea5fd4467ff62ade07a60d36fa89d4b5492bd8ae9ce6f63dfe45"} Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.011867 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-22a1-account-create-w8fss" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.035582 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6x89j" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.053280 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6ppv6" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.079747 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cba67f4d-09b1-4ef4-b159-c2dad51b1050-operator-scripts\") pod \"cba67f4d-09b1-4ef4-b159-c2dad51b1050\" (UID: \"cba67f4d-09b1-4ef4-b159-c2dad51b1050\") " Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.079833 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgsb5\" (UniqueName: \"kubernetes.io/projected/b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2-kube-api-access-pgsb5\") pod \"b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2\" (UID: \"b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2\") " Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.079906 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxm48\" (UniqueName: \"kubernetes.io/projected/2538e4c7-1c70-4919-ba63-e24b6e1fbca0-kube-api-access-rxm48\") pod \"2538e4c7-1c70-4919-ba63-e24b6e1fbca0\" (UID: \"2538e4c7-1c70-4919-ba63-e24b6e1fbca0\") " Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.079981 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2538e4c7-1c70-4919-ba63-e24b6e1fbca0-operator-scripts\") pod \"2538e4c7-1c70-4919-ba63-e24b6e1fbca0\" (UID: \"2538e4c7-1c70-4919-ba63-e24b6e1fbca0\") " Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.080014 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfcgm\" (UniqueName: \"kubernetes.io/projected/cba67f4d-09b1-4ef4-b159-c2dad51b1050-kube-api-access-dfcgm\") pod \"cba67f4d-09b1-4ef4-b159-c2dad51b1050\" (UID: \"cba67f4d-09b1-4ef4-b159-c2dad51b1050\") " Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.080086 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2-operator-scripts\") pod \"b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2\" (UID: \"b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2\") " Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.081102 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2" (UID: "b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.081484 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cba67f4d-09b1-4ef4-b159-c2dad51b1050-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cba67f4d-09b1-4ef4-b159-c2dad51b1050" (UID: "cba67f4d-09b1-4ef4-b159-c2dad51b1050"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.081889 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2538e4c7-1c70-4919-ba63-e24b6e1fbca0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2538e4c7-1c70-4919-ba63-e24b6e1fbca0" (UID: "2538e4c7-1c70-4919-ba63-e24b6e1fbca0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.091244 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2538e4c7-1c70-4919-ba63-e24b6e1fbca0-kube-api-access-rxm48" (OuterVolumeSpecName: "kube-api-access-rxm48") pod "2538e4c7-1c70-4919-ba63-e24b6e1fbca0" (UID: "2538e4c7-1c70-4919-ba63-e24b6e1fbca0"). InnerVolumeSpecName "kube-api-access-rxm48". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.091358 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2-kube-api-access-pgsb5" (OuterVolumeSpecName: "kube-api-access-pgsb5") pod "b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2" (UID: "b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2"). InnerVolumeSpecName "kube-api-access-pgsb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.096321 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cba67f4d-09b1-4ef4-b159-c2dad51b1050-kube-api-access-dfcgm" (OuterVolumeSpecName: "kube-api-access-dfcgm") pod "cba67f4d-09b1-4ef4-b159-c2dad51b1050" (UID: "cba67f4d-09b1-4ef4-b159-c2dad51b1050"). InnerVolumeSpecName "kube-api-access-dfcgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.182331 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgsb5\" (UniqueName: \"kubernetes.io/projected/b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2-kube-api-access-pgsb5\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.182357 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxm48\" (UniqueName: \"kubernetes.io/projected/2538e4c7-1c70-4919-ba63-e24b6e1fbca0-kube-api-access-rxm48\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.182366 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2538e4c7-1c70-4919-ba63-e24b6e1fbca0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.182375 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfcgm\" (UniqueName: \"kubernetes.io/projected/cba67f4d-09b1-4ef4-b159-c2dad51b1050-kube-api-access-dfcgm\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.182383 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.182391 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cba67f4d-09b1-4ef4-b159-c2dad51b1050-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.909818 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-22a1-account-create-w8fss" 
event={"ID":"cba67f4d-09b1-4ef4-b159-c2dad51b1050","Type":"ContainerDied","Data":"7509bbce8c44192750872131c318090f9ab355cdf0c3945ff3c485fb4dd58823"} Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.909879 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7509bbce8c44192750872131c318090f9ab355cdf0c3945ff3c485fb4dd58823" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.909996 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-22a1-account-create-w8fss" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.920991 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6x89j" event={"ID":"b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2","Type":"ContainerDied","Data":"dba95f1ccbf550933e07a92825af0a40661c7cfa56476a5a9dcd1328b8499fe7"} Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.921040 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dba95f1ccbf550933e07a92825af0a40661c7cfa56476a5a9dcd1328b8499fe7" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.921046 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6x89j" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.924815 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6ppv6" event={"ID":"2538e4c7-1c70-4919-ba63-e24b6e1fbca0","Type":"ContainerDied","Data":"02c60a33abb1c4bdbb00f9d0f41af31f7e3876ebd5fd7bcd9a7d3d688de0ffba"} Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.924844 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02c60a33abb1c4bdbb00f9d0f41af31f7e3876ebd5fd7bcd9a7d3d688de0ffba" Nov 24 12:43:53 crc kubenswrapper[4756]: I1124 12:43:53.924901 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-6ppv6" Nov 24 12:44:01 crc kubenswrapper[4756]: I1124 12:44:01.013820 4756 generic.go:334] "Generic (PLEG): container finished" podID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerID="81ec4e8709b8ea5fd4467ff62ade07a60d36fa89d4b5492bd8ae9ce6f63dfe45" exitCode=0 Nov 24 12:44:01 crc kubenswrapper[4756]: I1124 12:44:01.013906 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78c50331-2c8a-4ebf-8dfc-66456b7167c0","Type":"ContainerDied","Data":"81ec4e8709b8ea5fd4467ff62ade07a60d36fa89d4b5492bd8ae9ce6f63dfe45"} Nov 24 12:44:03 crc kubenswrapper[4756]: I1124 12:44:03.478922 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:44:03 crc kubenswrapper[4756]: I1124 12:44:03.479481 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:44:07 crc kubenswrapper[4756]: E1124 12:44:07.977684 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Nov 24 12:44:07 crc kubenswrapper[4756]: E1124 12:44:07.978884 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4vbc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-x592j_openstack(519ea567-18c2-49a6-8e45-ad4bb39ecd90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:44:07 crc kubenswrapper[4756]: E1124 12:44:07.980275 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-x592j" podUID="519ea567-18c2-49a6-8e45-ad4bb39ecd90" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.035389 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-66cf-account-create-gl79r" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.042518 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g4t66" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.052036 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8016-account-create-4ltbv" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.108475 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-66cf-account-create-gl79r" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.108465 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-66cf-account-create-gl79r" event={"ID":"7fb7bce2-6bda-4c2b-b661-e4f52c9e2046","Type":"ContainerDied","Data":"219f3950282b4131e2bd9701805dfbab883dacc1f0600d00f12ff84fc8575718"} Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.108548 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="219f3950282b4131e2bd9701805dfbab883dacc1f0600d00f12ff84fc8575718" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.110253 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g4t66" event={"ID":"32b34a20-5375-4e57-9934-848828dee2bf","Type":"ContainerDied","Data":"057cb1b7a431d27d4bd4a844909cbd02ffa9869eac5c9b18f35d1ac3abeeb375"} Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.110298 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057cb1b7a431d27d4bd4a844909cbd02ffa9869eac5c9b18f35d1ac3abeeb375" 
Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.110353 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g4t66" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.113064 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8016-account-create-4ltbv" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.113063 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8016-account-create-4ltbv" event={"ID":"43b1e6f3-dbff-4e44-9900-80796af14d00","Type":"ContainerDied","Data":"df57dbd1eb7e52352d19e11f583b31bb4b2b54dca2046be5df70d8a106b27297"} Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.113115 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df57dbd1eb7e52352d19e11f583b31bb4b2b54dca2046be5df70d8a106b27297" Nov 24 12:44:08 crc kubenswrapper[4756]: E1124 12:44:08.114323 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-x592j" podUID="519ea567-18c2-49a6-8e45-ad4bb39ecd90" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.183820 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qs28\" (UniqueName: \"kubernetes.io/projected/7fb7bce2-6bda-4c2b-b661-e4f52c9e2046-kube-api-access-9qs28\") pod \"7fb7bce2-6bda-4c2b-b661-e4f52c9e2046\" (UID: \"7fb7bce2-6bda-4c2b-b661-e4f52c9e2046\") " Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.183952 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fb7bce2-6bda-4c2b-b661-e4f52c9e2046-operator-scripts\") pod \"7fb7bce2-6bda-4c2b-b661-e4f52c9e2046\" (UID: 
\"7fb7bce2-6bda-4c2b-b661-e4f52c9e2046\") " Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.184445 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b34a20-5375-4e57-9934-848828dee2bf-operator-scripts\") pod \"32b34a20-5375-4e57-9934-848828dee2bf\" (UID: \"32b34a20-5375-4e57-9934-848828dee2bf\") " Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.184504 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b1e6f3-dbff-4e44-9900-80796af14d00-operator-scripts\") pod \"43b1e6f3-dbff-4e44-9900-80796af14d00\" (UID: \"43b1e6f3-dbff-4e44-9900-80796af14d00\") " Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.184558 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-684hg\" (UniqueName: \"kubernetes.io/projected/43b1e6f3-dbff-4e44-9900-80796af14d00-kube-api-access-684hg\") pod \"43b1e6f3-dbff-4e44-9900-80796af14d00\" (UID: \"43b1e6f3-dbff-4e44-9900-80796af14d00\") " Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.184594 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgfc2\" (UniqueName: \"kubernetes.io/projected/32b34a20-5375-4e57-9934-848828dee2bf-kube-api-access-jgfc2\") pod \"32b34a20-5375-4e57-9934-848828dee2bf\" (UID: \"32b34a20-5375-4e57-9934-848828dee2bf\") " Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.184621 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fb7bce2-6bda-4c2b-b661-e4f52c9e2046-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fb7bce2-6bda-4c2b-b661-e4f52c9e2046" (UID: "7fb7bce2-6bda-4c2b-b661-e4f52c9e2046"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.184979 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b34a20-5375-4e57-9934-848828dee2bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32b34a20-5375-4e57-9934-848828dee2bf" (UID: "32b34a20-5375-4e57-9934-848828dee2bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.185079 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b34a20-5375-4e57-9934-848828dee2bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.185249 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fb7bce2-6bda-4c2b-b661-e4f52c9e2046-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.185470 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43b1e6f3-dbff-4e44-9900-80796af14d00-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "43b1e6f3-dbff-4e44-9900-80796af14d00" (UID: "43b1e6f3-dbff-4e44-9900-80796af14d00"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.190131 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb7bce2-6bda-4c2b-b661-e4f52c9e2046-kube-api-access-9qs28" (OuterVolumeSpecName: "kube-api-access-9qs28") pod "7fb7bce2-6bda-4c2b-b661-e4f52c9e2046" (UID: "7fb7bce2-6bda-4c2b-b661-e4f52c9e2046"). InnerVolumeSpecName "kube-api-access-9qs28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.190774 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b34a20-5375-4e57-9934-848828dee2bf-kube-api-access-jgfc2" (OuterVolumeSpecName: "kube-api-access-jgfc2") pod "32b34a20-5375-4e57-9934-848828dee2bf" (UID: "32b34a20-5375-4e57-9934-848828dee2bf"). InnerVolumeSpecName "kube-api-access-jgfc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.191321 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b1e6f3-dbff-4e44-9900-80796af14d00-kube-api-access-684hg" (OuterVolumeSpecName: "kube-api-access-684hg") pod "43b1e6f3-dbff-4e44-9900-80796af14d00" (UID: "43b1e6f3-dbff-4e44-9900-80796af14d00"). InnerVolumeSpecName "kube-api-access-684hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.285857 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qs28\" (UniqueName: \"kubernetes.io/projected/7fb7bce2-6bda-4c2b-b661-e4f52c9e2046-kube-api-access-9qs28\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.285895 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b1e6f3-dbff-4e44-9900-80796af14d00-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.285905 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-684hg\" (UniqueName: \"kubernetes.io/projected/43b1e6f3-dbff-4e44-9900-80796af14d00-kube-api-access-684hg\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:08 crc kubenswrapper[4756]: I1124 12:44:08.285917 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgfc2\" (UniqueName: 
\"kubernetes.io/projected/32b34a20-5375-4e57-9934-848828dee2bf-kube-api-access-jgfc2\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:08 crc kubenswrapper[4756]: E1124 12:44:08.736314 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.217:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Nov 24 12:44:08 crc kubenswrapper[4756]: E1124 12:44:08.736761 4756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.217:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Nov 24 12:44:08 crc kubenswrapper[4756]: E1124 12:44:08.736955 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.217:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,Mo
untPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jlfz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-fm4f8_openstack(73042cf1-c8fa-417b-b688-cfed5a034a8b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:44:08 crc kubenswrapper[4756]: E1124 12:44:08.738241 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-fm4f8" podUID="73042cf1-c8fa-417b-b688-cfed5a034a8b" Nov 24 12:44:09 crc kubenswrapper[4756]: I1124 12:44:09.149039 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78c50331-2c8a-4ebf-8dfc-66456b7167c0","Type":"ContainerStarted","Data":"ab47ea840acc948a00c4aa917a71f2baab9d0baf5519e86c513395c7b9601068"} Nov 24 12:44:09 crc kubenswrapper[4756]: I1124 12:44:09.153911 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"d0e83bd78874f14d7f1482616ad71156ce3e8289aa86b265d718eec9ea3d37d6"} Nov 24 12:44:09 crc kubenswrapper[4756]: E1124 12:44:09.155371 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.217:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-fm4f8" podUID="73042cf1-c8fa-417b-b688-cfed5a034a8b" Nov 24 12:44:10 crc kubenswrapper[4756]: I1124 12:44:10.165052 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tflqn" event={"ID":"79cae2a4-d229-4c70-b19f-b9016c530697","Type":"ContainerStarted","Data":"881bcbf2234f7f14037c37522ccfd4e162d69ae2578f095d3d56d6560f1b05e0"} Nov 24 12:44:10 crc kubenswrapper[4756]: I1124 12:44:10.184061 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-tflqn" podStartSLOduration=2.602695186 podStartE2EDuration="45.184045724s" podCreationTimestamp="2025-11-24 12:43:25 +0000 UTC" firstStartedPulling="2025-11-24 12:43:26.370590744 +0000 UTC m=+938.728104886" lastFinishedPulling="2025-11-24 12:44:08.951941282 +0000 UTC m=+981.309455424" observedRunningTime="2025-11-24 12:44:10.17918266 +0000 UTC m=+982.536696802" watchObservedRunningTime="2025-11-24 12:44:10.184045724 +0000 UTC m=+982.541559866" Nov 24 12:44:11 crc kubenswrapper[4756]: I1124 12:44:11.190554 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"cd9681778c03be2a5f858f5f743779b01406406c547106e0d45daff02650f0e7"} Nov 24 12:44:11 crc kubenswrapper[4756]: I1124 12:44:11.190805 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"f3a699d1a88aae51affb73b6fccbf3da6ff8ab73836e1bc8901426e77792ae4f"} Nov 24 12:44:12 crc kubenswrapper[4756]: I1124 12:44:12.204593 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78c50331-2c8a-4ebf-8dfc-66456b7167c0","Type":"ContainerStarted","Data":"3a8884d6cece83da44f47b8603a22592214627015eb425cfe95a9e35c7f61cd2"} Nov 24 12:44:12 crc kubenswrapper[4756]: I1124 12:44:12.204701 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78c50331-2c8a-4ebf-8dfc-66456b7167c0","Type":"ContainerStarted","Data":"0cec482adb1b0d7c03f6f4fe6406f63b50d6ca8d44e72059efaa5baf7460cf29"} Nov 24 12:44:12 crc kubenswrapper[4756]: I1124 12:44:12.210527 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"642eda080b9b59d0f550384c138633787e1c1bc02f73df012e39b6fa0aca5a98"} Nov 24 12:44:12 crc kubenswrapper[4756]: I1124 12:44:12.210826 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"a7d01b449a7091fe71758d59ff6313c79bac301a82751bc5ac155a2f41730669"} Nov 24 12:44:12 crc kubenswrapper[4756]: I1124 12:44:12.237922 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=26.237905517 podStartE2EDuration="26.237905517s" podCreationTimestamp="2025-11-24 12:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:44:12.233690761 +0000 UTC m=+984.591204933" watchObservedRunningTime="2025-11-24 12:44:12.237905517 +0000 UTC m=+984.595419659" Nov 24 12:44:13 crc kubenswrapper[4756]: I1124 
12:44:13.327057 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"015f09c16a092d5699c8a6cabf007709be75605cb8fe7d9704f5a39749ac5bab"} Nov 24 12:44:13 crc kubenswrapper[4756]: I1124 12:44:13.327680 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"65631cac95bdeca2dd8c0e5bffa4d3c0d2700915ad1e5fe08927c1bc675f6033"} Nov 24 12:44:14 crc kubenswrapper[4756]: I1124 12:44:14.345248 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"9c204dbe5da03e3bf0c2c575da045ff7e5c75d99fa164615d23a05e3e0c9f30d"} Nov 24 12:44:14 crc kubenswrapper[4756]: I1124 12:44:14.345662 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"007f5af093df6d14f542db3dad5a029487e2da773152424fd1dd89af95bb203f"} Nov 24 12:44:14 crc kubenswrapper[4756]: I1124 12:44:14.345679 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"e9bcd7d19fd8e0fc8b7e6f0f3ebff84d3877f91c846607ea21383f7826a4614a"} Nov 24 12:44:14 crc kubenswrapper[4756]: I1124 12:44:14.345690 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"4bacbab9689144471d910ba5059778f829fb3ba6c72cbfd314bd0db7d81f1f60"} Nov 24 12:44:14 crc kubenswrapper[4756]: I1124 12:44:14.345701 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"9cf650c1-2692-4b3d-89c5-5e3e0178e213","Type":"ContainerStarted","Data":"6773e0c6e5ebc9c91420f3c0e3abf8f74dbc53bb883a09fd7bf45fcd64bf6eb1"} Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.387566 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.336758836 podStartE2EDuration="1m14.387547168s" podCreationTimestamp="2025-11-24 12:43:01 +0000 UTC" firstStartedPulling="2025-11-24 12:43:35.620812261 +0000 UTC m=+947.978326403" lastFinishedPulling="2025-11-24 12:44:12.671600593 +0000 UTC m=+985.029114735" observedRunningTime="2025-11-24 12:44:15.386926471 +0000 UTC m=+987.744440623" watchObservedRunningTime="2025-11-24 12:44:15.387547168 +0000 UTC m=+987.745061310" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.676351 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-9sb8l"] Nov 24 12:44:15 crc kubenswrapper[4756]: E1124 12:44:15.677081 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2" containerName="mariadb-database-create" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.677102 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2" containerName="mariadb-database-create" Nov 24 12:44:15 crc kubenswrapper[4756]: E1124 12:44:15.677117 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb7bce2-6bda-4c2b-b661-e4f52c9e2046" containerName="mariadb-account-create" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.677126 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb7bce2-6bda-4c2b-b661-e4f52c9e2046" containerName="mariadb-account-create" Nov 24 12:44:15 crc kubenswrapper[4756]: E1124 12:44:15.677145 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b34a20-5375-4e57-9934-848828dee2bf" containerName="mariadb-database-create" Nov 24 12:44:15 crc 
kubenswrapper[4756]: I1124 12:44:15.677153 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b34a20-5375-4e57-9934-848828dee2bf" containerName="mariadb-database-create" Nov 24 12:44:15 crc kubenswrapper[4756]: E1124 12:44:15.677188 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b1e6f3-dbff-4e44-9900-80796af14d00" containerName="mariadb-account-create" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.677196 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b1e6f3-dbff-4e44-9900-80796af14d00" containerName="mariadb-account-create" Nov 24 12:44:15 crc kubenswrapper[4756]: E1124 12:44:15.677215 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cba67f4d-09b1-4ef4-b159-c2dad51b1050" containerName="mariadb-account-create" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.677222 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="cba67f4d-09b1-4ef4-b159-c2dad51b1050" containerName="mariadb-account-create" Nov 24 12:44:15 crc kubenswrapper[4756]: E1124 12:44:15.677241 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2538e4c7-1c70-4919-ba63-e24b6e1fbca0" containerName="mariadb-database-create" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.677248 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2538e4c7-1c70-4919-ba63-e24b6e1fbca0" containerName="mariadb-database-create" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.677443 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb7bce2-6bda-4c2b-b661-e4f52c9e2046" containerName="mariadb-account-create" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.677473 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2538e4c7-1c70-4919-ba63-e24b6e1fbca0" containerName="mariadb-database-create" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.677484 4756 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="32b34a20-5375-4e57-9934-848828dee2bf" containerName="mariadb-database-create" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.677497 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="cba67f4d-09b1-4ef4-b159-c2dad51b1050" containerName="mariadb-account-create" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.677513 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b1e6f3-dbff-4e44-9900-80796af14d00" containerName="mariadb-account-create" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.677534 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2" containerName="mariadb-database-create" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.679451 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.682134 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.693482 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-9sb8l"] Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.736846 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-config\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.736953 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4k6c\" (UniqueName: \"kubernetes.io/projected/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-kube-api-access-l4k6c\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") 
" pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.737029 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.737376 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.737570 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.737652 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.839168 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4k6c\" (UniqueName: \"kubernetes.io/projected/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-kube-api-access-l4k6c\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: 
\"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.839256 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.839310 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.839346 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.839369 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.839395 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-config\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.840566 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.840640 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-config\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.840700 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.840884 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.841311 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:15 crc kubenswrapper[4756]: I1124 12:44:15.863590 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4k6c\" (UniqueName: \"kubernetes.io/projected/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-kube-api-access-l4k6c\") pod \"dnsmasq-dns-5c79d794d7-9sb8l\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:16 crc kubenswrapper[4756]: I1124 12:44:16.003068 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:16 crc kubenswrapper[4756]: I1124 12:44:16.492095 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-9sb8l"] Nov 24 12:44:16 crc kubenswrapper[4756]: W1124 12:44:16.493312 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3b9c760_290e_4c72_89e8_a5bab9cdbdb4.slice/crio-59c1b557bf582b268a047bec55ef1939affa2623c3b5bb15cdb7bb4591e37812 WatchSource:0}: Error finding container 59c1b557bf582b268a047bec55ef1939affa2623c3b5bb15cdb7bb4591e37812: Status 404 returned error can't find the container with id 59c1b557bf582b268a047bec55ef1939affa2623c3b5bb15cdb7bb4591e37812 Nov 24 12:44:17 crc kubenswrapper[4756]: I1124 12:44:17.019687 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 24 12:44:17 crc kubenswrapper[4756]: I1124 12:44:17.020062 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 24 12:44:17 crc kubenswrapper[4756]: I1124 12:44:17.028926 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 24 12:44:17 crc kubenswrapper[4756]: I1124 12:44:17.383867 4756 generic.go:334] "Generic (PLEG): container finished" podID="f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" 
containerID="c40c1d3ab12e2cd667d0649a4272baff09a9976d55b3aedc9f438aff33cad0b6" exitCode=0 Nov 24 12:44:17 crc kubenswrapper[4756]: I1124 12:44:17.384112 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" event={"ID":"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4","Type":"ContainerDied","Data":"c40c1d3ab12e2cd667d0649a4272baff09a9976d55b3aedc9f438aff33cad0b6"} Nov 24 12:44:17 crc kubenswrapper[4756]: I1124 12:44:17.384472 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" event={"ID":"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4","Type":"ContainerStarted","Data":"59c1b557bf582b268a047bec55ef1939affa2623c3b5bb15cdb7bb4591e37812"} Nov 24 12:44:17 crc kubenswrapper[4756]: I1124 12:44:17.393370 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 24 12:44:18 crc kubenswrapper[4756]: I1124 12:44:18.399471 4756 generic.go:334] "Generic (PLEG): container finished" podID="79cae2a4-d229-4c70-b19f-b9016c530697" containerID="881bcbf2234f7f14037c37522ccfd4e162d69ae2578f095d3d56d6560f1b05e0" exitCode=0 Nov 24 12:44:18 crc kubenswrapper[4756]: I1124 12:44:18.399582 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tflqn" event={"ID":"79cae2a4-d229-4c70-b19f-b9016c530697","Type":"ContainerDied","Data":"881bcbf2234f7f14037c37522ccfd4e162d69ae2578f095d3d56d6560f1b05e0"} Nov 24 12:44:18 crc kubenswrapper[4756]: I1124 12:44:18.402803 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" event={"ID":"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4","Type":"ContainerStarted","Data":"b021f2885cd09c8d53aed669bbbc64cf9944ae5528167f7c509386703d90084d"} Nov 24 12:44:18 crc kubenswrapper[4756]: I1124 12:44:18.446369 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" podStartSLOduration=3.446332718 
podStartE2EDuration="3.446332718s" podCreationTimestamp="2025-11-24 12:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:44:18.445051153 +0000 UTC m=+990.802565305" watchObservedRunningTime="2025-11-24 12:44:18.446332718 +0000 UTC m=+990.803846900" Nov 24 12:44:19 crc kubenswrapper[4756]: I1124 12:44:19.420622 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.357646 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tflqn" Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.435281 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-config-data\") pod \"79cae2a4-d229-4c70-b19f-b9016c530697\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.436797 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tflqn" Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.436866 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tflqn" event={"ID":"79cae2a4-d229-4c70-b19f-b9016c530697","Type":"ContainerDied","Data":"a5e66925af4f7363de64d2d3f450a6520e2ce14c2927790d9dbbe9a77e964455"} Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.437256 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5e66925af4f7363de64d2d3f450a6520e2ce14c2927790d9dbbe9a77e964455" Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.438333 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-combined-ca-bundle\") pod \"79cae2a4-d229-4c70-b19f-b9016c530697\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.438919 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4jmv\" (UniqueName: \"kubernetes.io/projected/79cae2a4-d229-4c70-b19f-b9016c530697-kube-api-access-p4jmv\") pod \"79cae2a4-d229-4c70-b19f-b9016c530697\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.439011 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-db-sync-config-data\") pod \"79cae2a4-d229-4c70-b19f-b9016c530697\" (UID: \"79cae2a4-d229-4c70-b19f-b9016c530697\") " Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.441045 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x592j" event={"ID":"519ea567-18c2-49a6-8e45-ad4bb39ecd90","Type":"ContainerStarted","Data":"21bccc702df2c98b4b2375138a4db6a15211f89153dd8eba4710d3d800d3f556"} Nov 24 12:44:20 
crc kubenswrapper[4756]: I1124 12:44:20.455129 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "79cae2a4-d229-4c70-b19f-b9016c530697" (UID: "79cae2a4-d229-4c70-b19f-b9016c530697"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.460561 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79cae2a4-d229-4c70-b19f-b9016c530697-kube-api-access-p4jmv" (OuterVolumeSpecName: "kube-api-access-p4jmv") pod "79cae2a4-d229-4c70-b19f-b9016c530697" (UID: "79cae2a4-d229-4c70-b19f-b9016c530697"). InnerVolumeSpecName "kube-api-access-p4jmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.471208 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-x592j" podStartSLOduration=10.305317465 podStartE2EDuration="43.471180901s" podCreationTimestamp="2025-11-24 12:43:37 +0000 UTC" firstStartedPulling="2025-11-24 12:43:46.814385228 +0000 UTC m=+959.171899370" lastFinishedPulling="2025-11-24 12:44:19.980248654 +0000 UTC m=+992.337762806" observedRunningTime="2025-11-24 12:44:20.464611379 +0000 UTC m=+992.822125521" watchObservedRunningTime="2025-11-24 12:44:20.471180901 +0000 UTC m=+992.828695043" Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.482264 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79cae2a4-d229-4c70-b19f-b9016c530697" (UID: "79cae2a4-d229-4c70-b19f-b9016c530697"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.523169 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-config-data" (OuterVolumeSpecName: "config-data") pod "79cae2a4-d229-4c70-b19f-b9016c530697" (UID: "79cae2a4-d229-4c70-b19f-b9016c530697"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.543692 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4jmv\" (UniqueName: \"kubernetes.io/projected/79cae2a4-d229-4c70-b19f-b9016c530697-kube-api-access-p4jmv\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.543736 4756 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.543746 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.543755 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cae2a4-d229-4c70-b19f-b9016c530697-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:20 crc kubenswrapper[4756]: I1124 12:44:20.971512 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-9sb8l"] Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.030597 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-mbtfd"] Nov 24 12:44:21 crc kubenswrapper[4756]: E1124 12:44:21.033012 4756 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="79cae2a4-d229-4c70-b19f-b9016c530697" containerName="glance-db-sync" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.033037 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="79cae2a4-d229-4c70-b19f-b9016c530697" containerName="glance-db-sync" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.035442 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="79cae2a4-d229-4c70-b19f-b9016c530697" containerName="glance-db-sync" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.041706 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.104855 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-mbtfd"] Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.161040 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.161308 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-config\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.161464 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: 
\"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.161631 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.161831 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtmqx\" (UniqueName: \"kubernetes.io/projected/24ebd976-65cd-4e15-a36f-d53cd9f09904-kube-api-access-rtmqx\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.161902 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.263709 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.263840 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-config\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: 
\"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.264206 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.264283 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.265267 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.265364 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.266344 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " 
pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.266542 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-config\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.266638 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtmqx\" (UniqueName: \"kubernetes.io/projected/24ebd976-65cd-4e15-a36f-d53cd9f09904-kube-api-access-rtmqx\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.266668 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.267912 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.288561 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtmqx\" (UniqueName: \"kubernetes.io/projected/24ebd976-65cd-4e15-a36f-d53cd9f09904-kube-api-access-rtmqx\") pod \"dnsmasq-dns-5f59b8f679-mbtfd\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc 
kubenswrapper[4756]: I1124 12:44:21.411189 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.465634 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" podUID="f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" containerName="dnsmasq-dns" containerID="cri-o://b021f2885cd09c8d53aed669bbbc64cf9944ae5528167f7c509386703d90084d" gracePeriod=10 Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.936305 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-mbtfd"] Nov 24 12:44:21 crc kubenswrapper[4756]: I1124 12:44:21.941721 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.091030 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-config\") pod \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.091072 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-dns-svc\") pod \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.091100 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4k6c\" (UniqueName: \"kubernetes.io/projected/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-kube-api-access-l4k6c\") pod \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.091145 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-ovsdbserver-sb\") pod \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.091192 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-dns-swift-storage-0\") pod \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.091252 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-ovsdbserver-nb\") pod \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\" (UID: \"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4\") " Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.103325 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-kube-api-access-l4k6c" (OuterVolumeSpecName: "kube-api-access-l4k6c") pod "f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" (UID: "f3b9c760-290e-4c72-89e8-a5bab9cdbdb4"). InnerVolumeSpecName "kube-api-access-l4k6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.143387 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" (UID: "f3b9c760-290e-4c72-89e8-a5bab9cdbdb4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.147118 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" (UID: "f3b9c760-290e-4c72-89e8-a5bab9cdbdb4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.151934 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-config" (OuterVolumeSpecName: "config") pod "f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" (UID: "f3b9c760-290e-4c72-89e8-a5bab9cdbdb4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.153692 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" (UID: "f3b9c760-290e-4c72-89e8-a5bab9cdbdb4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.161635 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" (UID: "f3b9c760-290e-4c72-89e8-a5bab9cdbdb4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.195701 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.200925 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.205351 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4k6c\" (UniqueName: \"kubernetes.io/projected/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-kube-api-access-l4k6c\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.205418 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.205439 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.205453 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.487963 4756 generic.go:334] "Generic (PLEG): container finished" podID="24ebd976-65cd-4e15-a36f-d53cd9f09904" containerID="3aacb240438a182bf2be92fab9fbffbd48ac76d97835317f7d247465df509a94" exitCode=0 Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.494517 4756 generic.go:334] 
"Generic (PLEG): container finished" podID="f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" containerID="b021f2885cd09c8d53aed669bbbc64cf9944ae5528167f7c509386703d90084d" exitCode=0 Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.494803 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.498310 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" event={"ID":"24ebd976-65cd-4e15-a36f-d53cd9f09904","Type":"ContainerDied","Data":"3aacb240438a182bf2be92fab9fbffbd48ac76d97835317f7d247465df509a94"} Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.498378 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" event={"ID":"24ebd976-65cd-4e15-a36f-d53cd9f09904","Type":"ContainerStarted","Data":"f786289a66d7c8ca6095964fc12318558a383fef382b08813a0bbf999bf8faf5"} Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.498404 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" event={"ID":"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4","Type":"ContainerDied","Data":"b021f2885cd09c8d53aed669bbbc64cf9944ae5528167f7c509386703d90084d"} Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.498429 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-9sb8l" event={"ID":"f3b9c760-290e-4c72-89e8-a5bab9cdbdb4","Type":"ContainerDied","Data":"59c1b557bf582b268a047bec55ef1939affa2623c3b5bb15cdb7bb4591e37812"} Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.498460 4756 scope.go:117] "RemoveContainer" containerID="b021f2885cd09c8d53aed669bbbc64cf9944ae5528167f7c509386703d90084d" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.580042 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-9sb8l"] Nov 24 12:44:22 crc kubenswrapper[4756]: 
I1124 12:44:22.582885 4756 scope.go:117] "RemoveContainer" containerID="c40c1d3ab12e2cd667d0649a4272baff09a9976d55b3aedc9f438aff33cad0b6" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.593771 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-9sb8l"] Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.611342 4756 scope.go:117] "RemoveContainer" containerID="b021f2885cd09c8d53aed669bbbc64cf9944ae5528167f7c509386703d90084d" Nov 24 12:44:22 crc kubenswrapper[4756]: E1124 12:44:22.611928 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b021f2885cd09c8d53aed669bbbc64cf9944ae5528167f7c509386703d90084d\": container with ID starting with b021f2885cd09c8d53aed669bbbc64cf9944ae5528167f7c509386703d90084d not found: ID does not exist" containerID="b021f2885cd09c8d53aed669bbbc64cf9944ae5528167f7c509386703d90084d" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.611958 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b021f2885cd09c8d53aed669bbbc64cf9944ae5528167f7c509386703d90084d"} err="failed to get container status \"b021f2885cd09c8d53aed669bbbc64cf9944ae5528167f7c509386703d90084d\": rpc error: code = NotFound desc = could not find container \"b021f2885cd09c8d53aed669bbbc64cf9944ae5528167f7c509386703d90084d\": container with ID starting with b021f2885cd09c8d53aed669bbbc64cf9944ae5528167f7c509386703d90084d not found: ID does not exist" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.611980 4756 scope.go:117] "RemoveContainer" containerID="c40c1d3ab12e2cd667d0649a4272baff09a9976d55b3aedc9f438aff33cad0b6" Nov 24 12:44:22 crc kubenswrapper[4756]: E1124 12:44:22.612340 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c40c1d3ab12e2cd667d0649a4272baff09a9976d55b3aedc9f438aff33cad0b6\": container with ID 
starting with c40c1d3ab12e2cd667d0649a4272baff09a9976d55b3aedc9f438aff33cad0b6 not found: ID does not exist" containerID="c40c1d3ab12e2cd667d0649a4272baff09a9976d55b3aedc9f438aff33cad0b6" Nov 24 12:44:22 crc kubenswrapper[4756]: I1124 12:44:22.612367 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40c1d3ab12e2cd667d0649a4272baff09a9976d55b3aedc9f438aff33cad0b6"} err="failed to get container status \"c40c1d3ab12e2cd667d0649a4272baff09a9976d55b3aedc9f438aff33cad0b6\": rpc error: code = NotFound desc = could not find container \"c40c1d3ab12e2cd667d0649a4272baff09a9976d55b3aedc9f438aff33cad0b6\": container with ID starting with c40c1d3ab12e2cd667d0649a4272baff09a9976d55b3aedc9f438aff33cad0b6 not found: ID does not exist" Nov 24 12:44:23 crc kubenswrapper[4756]: I1124 12:44:23.517746 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" event={"ID":"24ebd976-65cd-4e15-a36f-d53cd9f09904","Type":"ContainerStarted","Data":"d4c0481feda385b08b35f4d77669728f5416d5f936aca344c2cd4fbd10b4f7c7"} Nov 24 12:44:23 crc kubenswrapper[4756]: I1124 12:44:23.518292 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:23 crc kubenswrapper[4756]: I1124 12:44:23.547514 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" podStartSLOduration=3.547486466 podStartE2EDuration="3.547486466s" podCreationTimestamp="2025-11-24 12:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:44:23.538866118 +0000 UTC m=+995.896380260" watchObservedRunningTime="2025-11-24 12:44:23.547486466 +0000 UTC m=+995.905000618" Nov 24 12:44:24 crc kubenswrapper[4756]: I1124 12:44:24.486948 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" path="/var/lib/kubelet/pods/f3b9c760-290e-4c72-89e8-a5bab9cdbdb4/volumes" Nov 24 12:44:24 crc kubenswrapper[4756]: I1124 12:44:24.527641 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-fm4f8" event={"ID":"73042cf1-c8fa-417b-b688-cfed5a034a8b","Type":"ContainerStarted","Data":"dd5a77e7e4ce162d2a1a10457c784717db0a296e1201d03b2c00f36db1cfed81"} Nov 24 12:44:24 crc kubenswrapper[4756]: I1124 12:44:24.529013 4756 generic.go:334] "Generic (PLEG): container finished" podID="519ea567-18c2-49a6-8e45-ad4bb39ecd90" containerID="21bccc702df2c98b4b2375138a4db6a15211f89153dd8eba4710d3d800d3f556" exitCode=0 Nov 24 12:44:24 crc kubenswrapper[4756]: I1124 12:44:24.529081 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x592j" event={"ID":"519ea567-18c2-49a6-8e45-ad4bb39ecd90","Type":"ContainerDied","Data":"21bccc702df2c98b4b2375138a4db6a15211f89153dd8eba4710d3d800d3f556"} Nov 24 12:44:24 crc kubenswrapper[4756]: I1124 12:44:24.546865 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-fm4f8" podStartSLOduration=11.843894384 podStartE2EDuration="48.546848082s" podCreationTimestamp="2025-11-24 12:43:36 +0000 UTC" firstStartedPulling="2025-11-24 12:43:46.842651666 +0000 UTC m=+959.200165808" lastFinishedPulling="2025-11-24 12:44:23.545605364 +0000 UTC m=+995.903119506" observedRunningTime="2025-11-24 12:44:24.542870142 +0000 UTC m=+996.900384284" watchObservedRunningTime="2025-11-24 12:44:24.546848082 +0000 UTC m=+996.904362234" Nov 24 12:44:25 crc kubenswrapper[4756]: I1124 12:44:25.955844 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x592j" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.092805 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/519ea567-18c2-49a6-8e45-ad4bb39ecd90-config-data\") pod \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\" (UID: \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\") " Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.092894 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519ea567-18c2-49a6-8e45-ad4bb39ecd90-combined-ca-bundle\") pod \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\" (UID: \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\") " Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.092969 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vbc7\" (UniqueName: \"kubernetes.io/projected/519ea567-18c2-49a6-8e45-ad4bb39ecd90-kube-api-access-4vbc7\") pod \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\" (UID: \"519ea567-18c2-49a6-8e45-ad4bb39ecd90\") " Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.101153 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/519ea567-18c2-49a6-8e45-ad4bb39ecd90-kube-api-access-4vbc7" (OuterVolumeSpecName: "kube-api-access-4vbc7") pod "519ea567-18c2-49a6-8e45-ad4bb39ecd90" (UID: "519ea567-18c2-49a6-8e45-ad4bb39ecd90"). InnerVolumeSpecName "kube-api-access-4vbc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.138132 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/519ea567-18c2-49a6-8e45-ad4bb39ecd90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "519ea567-18c2-49a6-8e45-ad4bb39ecd90" (UID: "519ea567-18c2-49a6-8e45-ad4bb39ecd90"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.167189 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/519ea567-18c2-49a6-8e45-ad4bb39ecd90-config-data" (OuterVolumeSpecName: "config-data") pod "519ea567-18c2-49a6-8e45-ad4bb39ecd90" (UID: "519ea567-18c2-49a6-8e45-ad4bb39ecd90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.195652 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/519ea567-18c2-49a6-8e45-ad4bb39ecd90-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.195714 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/519ea567-18c2-49a6-8e45-ad4bb39ecd90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.195738 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vbc7\" (UniqueName: \"kubernetes.io/projected/519ea567-18c2-49a6-8e45-ad4bb39ecd90-kube-api-access-4vbc7\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.548216 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x592j" event={"ID":"519ea567-18c2-49a6-8e45-ad4bb39ecd90","Type":"ContainerDied","Data":"c0b3a32a0069bbe27ab927989d139ceb81173a3e32762b2186b8139702f4c9fe"} Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.548272 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0b3a32a0069bbe27ab927989d139ceb81173a3e32762b2186b8139702f4c9fe" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.548290 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x592j" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.858007 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-b5xjt"] Nov 24 12:44:26 crc kubenswrapper[4756]: E1124 12:44:26.858905 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" containerName="dnsmasq-dns" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.858926 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" containerName="dnsmasq-dns" Nov 24 12:44:26 crc kubenswrapper[4756]: E1124 12:44:26.858960 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="519ea567-18c2-49a6-8e45-ad4bb39ecd90" containerName="keystone-db-sync" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.858972 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="519ea567-18c2-49a6-8e45-ad4bb39ecd90" containerName="keystone-db-sync" Nov 24 12:44:26 crc kubenswrapper[4756]: E1124 12:44:26.858987 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" containerName="init" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.858994 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" containerName="init" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.859219 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3b9c760-290e-4c72-89e8-a5bab9cdbdb4" containerName="dnsmasq-dns" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.859235 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="519ea567-18c2-49a6-8e45-ad4bb39ecd90" containerName="keystone-db-sync" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.860099 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.867008 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-mbtfd"] Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.867452 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" podUID="24ebd976-65cd-4e15-a36f-d53cd9f09904" containerName="dnsmasq-dns" containerID="cri-o://d4c0481feda385b08b35f4d77669728f5416d5f936aca344c2cd4fbd10b4f7c7" gracePeriod=10 Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.874181 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.874499 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.874606 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.874915 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2m2wq" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.875027 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.894695 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b5xjt"] Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.956588 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-9dzhn"] Nov 24 12:44:26 crc kubenswrapper[4756]: I1124 12:44:26.958552 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.046483 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-9dzhn"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.056844 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-credential-keys\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.056958 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-combined-ca-bundle\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.057104 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.057132 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.057327 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.057395 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-fernet-keys\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.057431 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-scripts\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.057569 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-config-data\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.057659 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqfwj\" (UniqueName: \"kubernetes.io/projected/781c55f5-0031-40b5-9a12-d5bb53305150-kube-api-access-fqfwj\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.057699 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq7cp\" (UniqueName: \"kubernetes.io/projected/df8b15ec-50a4-456f-891e-d5cd3c0572e8-kube-api-access-rq7cp\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.057893 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.058000 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-config\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.160506 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.174506 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.174638 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.174694 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-fernet-keys\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.174739 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-scripts\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.174872 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-config-data\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.174952 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqfwj\" (UniqueName: \"kubernetes.io/projected/781c55f5-0031-40b5-9a12-d5bb53305150-kube-api-access-fqfwj\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.174983 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq7cp\" (UniqueName: 
\"kubernetes.io/projected/df8b15ec-50a4-456f-891e-d5cd3c0572e8-kube-api-access-rq7cp\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.167575 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.175375 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.175523 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-config\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.175726 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-credential-keys\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.175796 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-combined-ca-bundle\") pod 
\"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.176894 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.176976 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.181534 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.198081 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-config\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.198327 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-scripts\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc 
kubenswrapper[4756]: I1124 12:44:27.198766 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-config-data\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.230383 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-fernet-keys\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.270819 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-combined-ca-bundle\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.271282 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-credential-keys\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.280530 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq7cp\" (UniqueName: \"kubernetes.io/projected/df8b15ec-50a4-456f-891e-d5cd3c0572e8-kube-api-access-rq7cp\") pod \"dnsmasq-dns-bbf5cc879-9dzhn\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.298593 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-db-sync-vbnrc"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.316081 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.329271 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.329646 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-glrkw" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.343252 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.359264 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vbnrc"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.360670 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqfwj\" (UniqueName: \"kubernetes.io/projected/781c55f5-0031-40b5-9a12-d5bb53305150-kube-api-access-fqfwj\") pod \"keystone-bootstrap-b5xjt\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.422535 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-84746cc9cc-f9r4d"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.427660 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.429271 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.433723 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.433988 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-48vpv" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.447570 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.448308 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.475731 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-config-data\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.475774 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-db-sync-config-data\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.475855 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e34263-c415-4300-a110-ab2ad6787566-etc-machine-id\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.475909 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-scripts\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.475949 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-combined-ca-bundle\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.475987 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzkbq\" (UniqueName: \"kubernetes.io/projected/e5e34263-c415-4300-a110-ab2ad6787566-kube-api-access-mzkbq\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.491087 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84746cc9cc-f9r4d"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.495933 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.544238 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-wrtzq"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.545396 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-wrtzq" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.550658 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rv2mk" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.550936 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.551182 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.590913 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzkbq\" (UniqueName: \"kubernetes.io/projected/e5e34263-c415-4300-a110-ab2ad6787566-kube-api-access-mzkbq\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.591020 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0286b63b-3a9a-4623-ae9c-7032413a5154-config-data\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.591077 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0286b63b-3a9a-4623-ae9c-7032413a5154-scripts\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.591103 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2dt4\" (UniqueName: 
\"kubernetes.io/projected/a6f689ad-3620-48b3-ae57-d19148ecb376-kube-api-access-j2dt4\") pod \"neutron-db-sync-wrtzq\" (UID: \"a6f689ad-3620-48b3-ae57-d19148ecb376\") " pod="openstack/neutron-db-sync-wrtzq" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.591204 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f689ad-3620-48b3-ae57-d19148ecb376-combined-ca-bundle\") pod \"neutron-db-sync-wrtzq\" (UID: \"a6f689ad-3620-48b3-ae57-d19148ecb376\") " pod="openstack/neutron-db-sync-wrtzq" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.591253 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-config-data\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.591320 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-db-sync-config-data\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.591421 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb5cz\" (UniqueName: \"kubernetes.io/projected/0286b63b-3a9a-4623-ae9c-7032413a5154-kube-api-access-bb5cz\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.591548 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0286b63b-3a9a-4623-ae9c-7032413a5154-logs\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.591586 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6f689ad-3620-48b3-ae57-d19148ecb376-config\") pod \"neutron-db-sync-wrtzq\" (UID: \"a6f689ad-3620-48b3-ae57-d19148ecb376\") " pod="openstack/neutron-db-sync-wrtzq" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.591660 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e34263-c415-4300-a110-ab2ad6787566-etc-machine-id\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.591817 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-scripts\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.591858 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0286b63b-3a9a-4623-ae9c-7032413a5154-horizon-secret-key\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.591898 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-combined-ca-bundle\") pod \"cinder-db-sync-vbnrc\" 
(UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.597767 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e34263-c415-4300-a110-ab2ad6787566-etc-machine-id\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.607026 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-combined-ca-bundle\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.620384 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-config-data\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.627689 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-db-sync-config-data\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.629639 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-scripts\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.633000 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mzkbq\" (UniqueName: \"kubernetes.io/projected/e5e34263-c415-4300-a110-ab2ad6787566-kube-api-access-mzkbq\") pod \"cinder-db-sync-vbnrc\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") " pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.638826 4756 generic.go:334] "Generic (PLEG): container finished" podID="24ebd976-65cd-4e15-a36f-d53cd9f09904" containerID="d4c0481feda385b08b35f4d77669728f5416d5f936aca344c2cd4fbd10b4f7c7" exitCode=0 Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.638926 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" event={"ID":"24ebd976-65cd-4e15-a36f-d53cd9f09904","Type":"ContainerDied","Data":"d4c0481feda385b08b35f4d77669728f5416d5f936aca344c2cd4fbd10b4f7c7"} Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.653088 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wrtzq"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.680348 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-9dzhn"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.695480 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wz84p"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.697770 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wz84p" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.702188 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-z5hzj" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.702502 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.702637 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-9bg6g"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.713074 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.719903 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.720300 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.720460 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bzftr" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.730119 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0286b63b-3a9a-4623-ae9c-7032413a5154-config-data\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.735534 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0286b63b-3a9a-4623-ae9c-7032413a5154-config-data\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 
12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.749197 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0286b63b-3a9a-4623-ae9c-7032413a5154-scripts\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.749248 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2dt4\" (UniqueName: \"kubernetes.io/projected/a6f689ad-3620-48b3-ae57-d19148ecb376-kube-api-access-j2dt4\") pod \"neutron-db-sync-wrtzq\" (UID: \"a6f689ad-3620-48b3-ae57-d19148ecb376\") " pod="openstack/neutron-db-sync-wrtzq" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.749369 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f689ad-3620-48b3-ae57-d19148ecb376-combined-ca-bundle\") pod \"neutron-db-sync-wrtzq\" (UID: \"a6f689ad-3620-48b3-ae57-d19148ecb376\") " pod="openstack/neutron-db-sync-wrtzq" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.749553 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb5cz\" (UniqueName: \"kubernetes.io/projected/0286b63b-3a9a-4623-ae9c-7032413a5154-kube-api-access-bb5cz\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.749666 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0286b63b-3a9a-4623-ae9c-7032413a5154-logs\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.749712 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6f689ad-3620-48b3-ae57-d19148ecb376-config\") pod \"neutron-db-sync-wrtzq\" (UID: \"a6f689ad-3620-48b3-ae57-d19148ecb376\") " pod="openstack/neutron-db-sync-wrtzq" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.749904 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0286b63b-3a9a-4623-ae9c-7032413a5154-horizon-secret-key\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.750185 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0286b63b-3a9a-4623-ae9c-7032413a5154-scripts\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.751026 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.756620 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0286b63b-3a9a-4623-ae9c-7032413a5154-logs\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.757871 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wz84p"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.782414 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0286b63b-3a9a-4623-ae9c-7032413a5154-horizon-secret-key\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.782637 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2dt4\" (UniqueName: \"kubernetes.io/projected/a6f689ad-3620-48b3-ae57-d19148ecb376-kube-api-access-j2dt4\") pod \"neutron-db-sync-wrtzq\" (UID: \"a6f689ad-3620-48b3-ae57-d19148ecb376\") " pod="openstack/neutron-db-sync-wrtzq" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.784669 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f689ad-3620-48b3-ae57-d19148ecb376-combined-ca-bundle\") pod \"neutron-db-sync-wrtzq\" (UID: \"a6f689ad-3620-48b3-ae57-d19148ecb376\") " pod="openstack/neutron-db-sync-wrtzq" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.785228 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6f689ad-3620-48b3-ae57-d19148ecb376-config\") pod \"neutron-db-sync-wrtzq\" (UID: 
\"a6f689ad-3620-48b3-ae57-d19148ecb376\") " pod="openstack/neutron-db-sync-wrtzq" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.793533 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5lxht"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.795595 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.799454 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb5cz\" (UniqueName: \"kubernetes.io/projected/0286b63b-3a9a-4623-ae9c-7032413a5154-kube-api-access-bb5cz\") pod \"horizon-84746cc9cc-f9r4d\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.811503 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9bg6g"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.820844 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.824521 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.834087 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.842841 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.851580 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-797c846fdf-pcdw6"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.854982 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.866390 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5lxht"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.867473 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee404871-3e83-4fa1-a773-df0c95222c32-logs\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.867565 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-scripts\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.867680 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f485ab9-01fd-4640-833e-8ee586798f2e-db-sync-config-data\") pod \"barbican-db-sync-wz84p\" (UID: \"8f485ab9-01fd-4640-833e-8ee586798f2e\") " pod="openstack/barbican-db-sync-wz84p" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.867944 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kk8k\" (UniqueName: \"kubernetes.io/projected/ee404871-3e83-4fa1-a773-df0c95222c32-kube-api-access-7kk8k\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.868040 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-combined-ca-bundle\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.868260 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hz4g\" (UniqueName: \"kubernetes.io/projected/8f485ab9-01fd-4640-833e-8ee586798f2e-kube-api-access-8hz4g\") pod \"barbican-db-sync-wz84p\" (UID: \"8f485ab9-01fd-4640-833e-8ee586798f2e\") " pod="openstack/barbican-db-sync-wz84p" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.868368 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f485ab9-01fd-4640-833e-8ee586798f2e-combined-ca-bundle\") pod \"barbican-db-sync-wz84p\" (UID: \"8f485ab9-01fd-4640-833e-8ee586798f2e\") " pod="openstack/barbican-db-sync-wz84p" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.868454 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-config-data\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.883492 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-797c846fdf-pcdw6"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.891942 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-wrtzq" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.898925 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.913083 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.927315 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.934204 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.934733 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.934923 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qx8l5" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.935282 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.943528 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.973608 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-scripts\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.975324 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.975491 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.975598 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03921298-d6d8-404c-9ee5-c5101a92892e-log-httpd\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.975711 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f485ab9-01fd-4640-833e-8ee586798f2e-db-sync-config-data\") pod \"barbican-db-sync-wz84p\" (UID: \"8f485ab9-01fd-4640-833e-8ee586798f2e\") " pod="openstack/barbican-db-sync-wz84p" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.975811 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.975927 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-scripts\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.976059 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03921298-d6d8-404c-9ee5-c5101a92892e-run-httpd\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.976238 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.976332 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.976464 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdzds\" (UniqueName: \"kubernetes.io/projected/afe9dc1a-7891-46c4-a813-56ae99f0f886-kube-api-access-mdzds\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.976700 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-config-data\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.976833 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ctnz\" (UniqueName: \"kubernetes.io/projected/27a19519-b508-46b4-b8e7-87cf03d7c6bd-kube-api-access-4ctnz\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.983563 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afe9dc1a-7891-46c4-a813-56ae99f0f886-config-data\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.983682 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kk8k\" (UniqueName: \"kubernetes.io/projected/ee404871-3e83-4fa1-a773-df0c95222c32-kube-api-access-7kk8k\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.983779 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe9dc1a-7891-46c4-a813-56ae99f0f886-logs\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.983928 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-combined-ca-bundle\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.984054 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afe9dc1a-7891-46c4-a813-56ae99f0f886-scripts\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.984169 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/afe9dc1a-7891-46c4-a813-56ae99f0f886-horizon-secret-key\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.984309 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hz4g\" (UniqueName: \"kubernetes.io/projected/8f485ab9-01fd-4640-833e-8ee586798f2e-kube-api-access-8hz4g\") pod \"barbican-db-sync-wz84p\" (UID: \"8f485ab9-01fd-4640-833e-8ee586798f2e\") " pod="openstack/barbican-db-sync-wz84p" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.984396 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-config\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.984483 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8f485ab9-01fd-4640-833e-8ee586798f2e-combined-ca-bundle\") pod \"barbican-db-sync-wz84p\" (UID: \"8f485ab9-01fd-4640-833e-8ee586798f2e\") " pod="openstack/barbican-db-sync-wz84p" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.984573 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-config-data\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.984745 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ff5k\" (UniqueName: \"kubernetes.io/projected/03921298-d6d8-404c-9ee5-c5101a92892e-kube-api-access-6ff5k\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.984840 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.984933 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee404871-3e83-4fa1-a773-df0c95222c32-logs\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.985587 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee404871-3e83-4fa1-a773-df0c95222c32-logs\") pod \"placement-db-sync-9bg6g\" (UID: 
\"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:27 crc kubenswrapper[4756]: I1124 12:44:27.987943 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-scripts\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:27.998283 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f485ab9-01fd-4640-833e-8ee586798f2e-combined-ca-bundle\") pod \"barbican-db-sync-wz84p\" (UID: \"8f485ab9-01fd-4640-833e-8ee586798f2e\") " pod="openstack/barbican-db-sync-wz84p" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:27.999021 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-config-data\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.001852 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-combined-ca-bundle\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.007828 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kk8k\" (UniqueName: \"kubernetes.io/projected/ee404871-3e83-4fa1-a773-df0c95222c32-kube-api-access-7kk8k\") pod \"placement-db-sync-9bg6g\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.013208 
4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f485ab9-01fd-4640-833e-8ee586798f2e-db-sync-config-data\") pod \"barbican-db-sync-wz84p\" (UID: \"8f485ab9-01fd-4640-833e-8ee586798f2e\") " pod="openstack/barbican-db-sync-wz84p" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.024354 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hz4g\" (UniqueName: \"kubernetes.io/projected/8f485ab9-01fd-4640-833e-8ee586798f2e-kube-api-access-8hz4g\") pod \"barbican-db-sync-wz84p\" (UID: \"8f485ab9-01fd-4640-833e-8ee586798f2e\") " pod="openstack/barbican-db-sync-wz84p" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.085424 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087513 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ff5k\" (UniqueName: \"kubernetes.io/projected/03921298-d6d8-404c-9ee5-c5101a92892e-kube-api-access-6ff5k\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087586 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087621 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " 
pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087659 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087706 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087727 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087748 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03921298-d6d8-404c-9ee5-c5101a92892e-log-httpd\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087770 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-scripts\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 
12:44:28.087791 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087816 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-scripts\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087840 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03921298-d6d8-404c-9ee5-c5101a92892e-run-httpd\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087876 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087899 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087932 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72wnn\" (UniqueName: 
\"kubernetes.io/projected/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-kube-api-access-72wnn\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087958 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdzds\" (UniqueName: \"kubernetes.io/projected/afe9dc1a-7891-46c4-a813-56ae99f0f886-kube-api-access-mdzds\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.087981 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-config-data\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.088012 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ctnz\" (UniqueName: \"kubernetes.io/projected/27a19519-b508-46b4-b8e7-87cf03d7c6bd-kube-api-access-4ctnz\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.088034 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afe9dc1a-7891-46c4-a813-56ae99f0f886-config-data\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.088065 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe9dc1a-7891-46c4-a813-56ae99f0f886-logs\") pod 
\"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.088108 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afe9dc1a-7891-46c4-a813-56ae99f0f886-scripts\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.088136 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/afe9dc1a-7891-46c4-a813-56ae99f0f886-horizon-secret-key\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.088790 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03921298-d6d8-404c-9ee5-c5101a92892e-run-httpd\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.088844 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.089684 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:28 crc 
kubenswrapper[4756]: I1124 12:44:28.090307 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.091481 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.091528 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.091555 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-logs\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.091576 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-config\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.091632 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-config-data\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.092362 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03921298-d6d8-404c-9ee5-c5101a92892e-log-httpd\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.096996 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.099024 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-config\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.104992 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.108009 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/afe9dc1a-7891-46c4-a813-56ae99f0f886-logs\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.109826 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-scripts\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.115780 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9bg6g" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.115808 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afe9dc1a-7891-46c4-a813-56ae99f0f886-scripts\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.117676 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-config-data\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.117979 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wz84p" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.131648 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afe9dc1a-7891-46c4-a813-56ae99f0f886-config-data\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.132437 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ff5k\" (UniqueName: \"kubernetes.io/projected/03921298-d6d8-404c-9ee5-c5101a92892e-kube-api-access-6ff5k\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.133343 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/afe9dc1a-7891-46c4-a813-56ae99f0f886-horizon-secret-key\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.136959 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdzds\" (UniqueName: \"kubernetes.io/projected/afe9dc1a-7891-46c4-a813-56ae99f0f886-kube-api-access-mdzds\") pod \"horizon-797c846fdf-pcdw6\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.151696 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.156778 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ctnz\" (UniqueName: \"kubernetes.io/projected/27a19519-b508-46b4-b8e7-87cf03d7c6bd-kube-api-access-4ctnz\") pod \"dnsmasq-dns-56df8fb6b7-5lxht\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.168767 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.197564 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72wnn\" (UniqueName: \"kubernetes.io/projected/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-kube-api-access-72wnn\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.197687 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.197726 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.197754 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-logs\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " 
pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.197793 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-config-data\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.197856 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.197889 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.197931 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-scripts\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.208275 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.209967 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.210145 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.210390 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.210701 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-logs\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.216636 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc 
kubenswrapper[4756]: I1124 12:44:28.225405 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-scripts\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.227976 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-config-data\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.280348 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72wnn\" (UniqueName: \"kubernetes.io/projected/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-kube-api-access-72wnn\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.342037 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.348572 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.352932 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.353001 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.358299 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.438783 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.440309 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.557898 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.558012 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.558072 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.558103 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.558144 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.558184 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lv4c\" (UniqueName: \"kubernetes.io/projected/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-kube-api-access-7lv4c\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.558217 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.558358 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.582293 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.584584 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.661889 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-config\") pod \"24ebd976-65cd-4e15-a36f-d53cd9f09904\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.662295 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-dns-svc\") pod \"24ebd976-65cd-4e15-a36f-d53cd9f09904\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.662344 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtmqx\" (UniqueName: \"kubernetes.io/projected/24ebd976-65cd-4e15-a36f-d53cd9f09904-kube-api-access-rtmqx\") pod \"24ebd976-65cd-4e15-a36f-d53cd9f09904\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.662377 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-ovsdbserver-sb\") pod \"24ebd976-65cd-4e15-a36f-d53cd9f09904\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.662415 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-ovsdbserver-nb\") pod \"24ebd976-65cd-4e15-a36f-d53cd9f09904\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.662469 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-dns-swift-storage-0\") pod \"24ebd976-65cd-4e15-a36f-d53cd9f09904\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.662805 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.662855 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.662917 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.663009 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.663068 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-scripts\") pod \"glance-default-internal-api-0\" 
(UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.663129 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-logs\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.663172 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lv4c\" (UniqueName: \"kubernetes.io/projected/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-kube-api-access-7lv4c\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.663224 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.664411 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-logs\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.670678 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") device mount path \"/mnt/openstack/pv08\"" 
pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.671894 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.685458 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.688432 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ebd976-65cd-4e15-a36f-d53cd9f09904-kube-api-access-rtmqx" (OuterVolumeSpecName: "kube-api-access-rtmqx") pod "24ebd976-65cd-4e15-a36f-d53cd9f09904" (UID: "24ebd976-65cd-4e15-a36f-d53cd9f09904"). InnerVolumeSpecName "kube-api-access-rtmqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.689195 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.695729 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.697554 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" event={"ID":"24ebd976-65cd-4e15-a36f-d53cd9f09904","Type":"ContainerDied","Data":"f786289a66d7c8ca6095964fc12318558a383fef382b08813a0bbf999bf8faf5"} Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.697602 4756 scope.go:117] "RemoveContainer" containerID="d4c0481feda385b08b35f4d77669728f5416d5f936aca344c2cd4fbd10b4f7c7" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.699865 4756 generic.go:334] "Generic (PLEG): container finished" podID="73042cf1-c8fa-417b-b688-cfed5a034a8b" containerID="dd5a77e7e4ce162d2a1a10457c784717db0a296e1201d03b2c00f36db1cfed81" exitCode=0 Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.699889 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-fm4f8" event={"ID":"73042cf1-c8fa-417b-b688-cfed5a034a8b","Type":"ContainerDied","Data":"dd5a77e7e4ce162d2a1a10457c784717db0a296e1201d03b2c00f36db1cfed81"} Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.702770 4756 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-mbtfd" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.708149 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.721276 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lv4c\" (UniqueName: \"kubernetes.io/projected/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-kube-api-access-7lv4c\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.754602 4756 scope.go:117] "RemoveContainer" containerID="3aacb240438a182bf2be92fab9fbffbd48ac76d97835317f7d247465df509a94" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.765564 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtmqx\" (UniqueName: \"kubernetes.io/projected/24ebd976-65cd-4e15-a36f-d53cd9f09904-kube-api-access-rtmqx\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.849220 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vbnrc"] Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.869293 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-config" (OuterVolumeSpecName: "config") pod "24ebd976-65cd-4e15-a36f-d53cd9f09904" (UID: "24ebd976-65cd-4e15-a36f-d53cd9f09904"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.871847 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-config\") pod \"24ebd976-65cd-4e15-a36f-d53cd9f09904\" (UID: \"24ebd976-65cd-4e15-a36f-d53cd9f09904\") " Nov 24 12:44:28 crc kubenswrapper[4756]: W1124 12:44:28.872561 4756 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/24ebd976-65cd-4e15-a36f-d53cd9f09904/volumes/kubernetes.io~configmap/config Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.872593 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-config" (OuterVolumeSpecName: "config") pod "24ebd976-65cd-4e15-a36f-d53cd9f09904" (UID: "24ebd976-65cd-4e15-a36f-d53cd9f09904"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.874862 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.877957 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-9dzhn"] Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.925630 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b5xjt"] Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.938140 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24ebd976-65cd-4e15-a36f-d53cd9f09904" (UID: "24ebd976-65cd-4e15-a36f-d53cd9f09904"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.975988 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:28 crc kubenswrapper[4756]: I1124 12:44:28.993905 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:29 crc kubenswrapper[4756]: I1124 12:44:29.048183 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wrtzq"] Nov 24 12:44:29 crc kubenswrapper[4756]: I1124 12:44:29.062355 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24ebd976-65cd-4e15-a36f-d53cd9f09904" (UID: "24ebd976-65cd-4e15-a36f-d53cd9f09904"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:29 crc kubenswrapper[4756]: I1124 12:44:29.075185 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24ebd976-65cd-4e15-a36f-d53cd9f09904" (UID: "24ebd976-65cd-4e15-a36f-d53cd9f09904"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:29 crc kubenswrapper[4756]: I1124 12:44:29.078399 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:29 crc kubenswrapper[4756]: I1124 12:44:29.079035 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:29 crc kubenswrapper[4756]: I1124 12:44:29.086589 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "24ebd976-65cd-4e15-a36f-d53cd9f09904" (UID: "24ebd976-65cd-4e15-a36f-d53cd9f09904"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:29 crc kubenswrapper[4756]: I1124 12:44:29.182852 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24ebd976-65cd-4e15-a36f-d53cd9f09904-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.291500 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.418809 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-mbtfd"] Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.441486 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-mbtfd"] Nov 24 12:44:30 crc kubenswrapper[4756]: W1124 12:44:29.444113 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0286b63b_3a9a_4623_ae9c_7032413a5154.slice/crio-1cc68b32e8fb10711a9d7006cdb3ebd9e61a858a050fb91376b716bb66b30d3d WatchSource:0}: Error finding container 1cc68b32e8fb10711a9d7006cdb3ebd9e61a858a050fb91376b716bb66b30d3d: Status 404 returned error can't find the container with id 1cc68b32e8fb10711a9d7006cdb3ebd9e61a858a050fb91376b716bb66b30d3d Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.452780 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84746cc9cc-f9r4d"] Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.486857 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.517970 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9bg6g"] Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.691923 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.746819 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" event={"ID":"df8b15ec-50a4-456f-891e-d5cd3c0572e8","Type":"ContainerStarted","Data":"27413e636e99cf1d9c9e6decb984a18f29b1411ddb10b0b4fd14e1d110bf380f"} Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.786615 4756 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/horizon-797c846fdf-pcdw6"] Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.803788 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84746cc9cc-f9r4d" event={"ID":"0286b63b-3a9a-4623-ae9c-7032413a5154","Type":"ContainerStarted","Data":"1cc68b32e8fb10711a9d7006cdb3ebd9e61a858a050fb91376b716bb66b30d3d"} Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.820907 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03921298-d6d8-404c-9ee5-c5101a92892e","Type":"ContainerStarted","Data":"af3d9542220e4f8fcca1258d972b2e5412e2c6657ff943b906a2e3854f2028b1"} Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.827219 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.862398 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b954b9689-cp7gf"] Nov 24 12:44:30 crc kubenswrapper[4756]: E1124 12:44:29.862859 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ebd976-65cd-4e15-a36f-d53cd9f09904" containerName="init" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.862874 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ebd976-65cd-4e15-a36f-d53cd9f09904" containerName="init" Nov 24 12:44:30 crc kubenswrapper[4756]: E1124 12:44:29.862890 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ebd976-65cd-4e15-a36f-d53cd9f09904" containerName="dnsmasq-dns" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.862899 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ebd976-65cd-4e15-a36f-d53cd9f09904" containerName="dnsmasq-dns" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.906492 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ebd976-65cd-4e15-a36f-d53cd9f09904" containerName="dnsmasq-dns" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 
12:44:29.908836 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vbnrc" event={"ID":"e5e34263-c415-4300-a110-ab2ad6787566","Type":"ContainerStarted","Data":"0abf8181c54af0fbe088ac540bbd2e1f80fae1e98dab2ca5817bf548b318cd48"} Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.908988 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:29.982370 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5xjt" event={"ID":"781c55f5-0031-40b5-9a12-d5bb53305150","Type":"ContainerStarted","Data":"1e2fc4d66e76c65a6a8a1b3a67f4902dcf131710f6c842efa165b8f17023a2d6"} Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.010622 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wrtzq" event={"ID":"a6f689ad-3620-48b3-ae57-d19148ecb376","Type":"ContainerStarted","Data":"d51bb7f2c738ffecd21ce960c2d6d19c6e2c14623ca0d8928fe5c0e1807bdb84"} Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.061223 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b954b9689-cp7gf"] Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.065962 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-scripts\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.066082 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-config-data\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" 
Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.066129 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-horizon-secret-key\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.066245 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvh2r\" (UniqueName: \"kubernetes.io/projected/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-kube-api-access-gvh2r\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.066269 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-logs\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: W1124 12:44:30.100060 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee404871_3e83_4fa1_a773_df0c95222c32.slice/crio-dc5fdb79070a105144059bf4c68d7f99fa4e868b40c501313eb0887df86e86c6 WatchSource:0}: Error finding container dc5fdb79070a105144059bf4c68d7f99fa4e868b40c501313eb0887df86e86c6: Status 404 returned error can't find the container with id dc5fdb79070a105144059bf4c68d7f99fa4e868b40c501313eb0887df86e86c6 Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.169852 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvh2r\" (UniqueName: 
\"kubernetes.io/projected/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-kube-api-access-gvh2r\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.169905 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-logs\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.170068 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-scripts\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.170302 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-config-data\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.170441 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-horizon-secret-key\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.171676 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-logs\") pod \"horizon-6b954b9689-cp7gf\" (UID: 
\"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.172486 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-scripts\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.173306 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-config-data\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.184199 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-horizon-secret-key\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.210020 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvh2r\" (UniqueName: \"kubernetes.io/projected/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-kube-api-access-gvh2r\") pod \"horizon-6b954b9689-cp7gf\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.344626 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.536358 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ebd976-65cd-4e15-a36f-d53cd9f09904" path="/var/lib/kubelet/pods/24ebd976-65cd-4e15-a36f-d53cd9f09904/volumes" Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.550788 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wz84p"] Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.561125 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-797c846fdf-pcdw6"] Nov 24 12:44:30 crc kubenswrapper[4756]: I1124 12:44:30.905283 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.066957 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wz84p" event={"ID":"8f485ab9-01fd-4640-833e-8ee586798f2e","Type":"ContainerStarted","Data":"b0f578169c4d766aca6ba7ea3239b021b7736ffb79b71939d92c21adc4702e1a"} Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.068080 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9bg6g" event={"ID":"ee404871-3e83-4fa1-a773-df0c95222c32","Type":"ContainerStarted","Data":"dc5fdb79070a105144059bf4c68d7f99fa4e868b40c501313eb0887df86e86c6"} Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.069478 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-797c846fdf-pcdw6" event={"ID":"afe9dc1a-7891-46c4-a813-56ae99f0f886","Type":"ContainerStarted","Data":"5a3d4705a5ef5215f589571b657cdc454630920cdfb3898fe9f0fdc5498c3bb4"} Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.099335 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5xjt" 
event={"ID":"781c55f5-0031-40b5-9a12-d5bb53305150","Type":"ContainerStarted","Data":"950682b01db04530c0ac8af34323afc47edae40fe59c7b36a210b702de6b4ccf"} Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.102849 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wrtzq" event={"ID":"a6f689ad-3620-48b3-ae57-d19148ecb376","Type":"ContainerStarted","Data":"bfc21104f20e44b50196ee6e3e300bc2a3506de3eaa9357e6749792fd13c36c7"} Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.108022 4756 generic.go:334] "Generic (PLEG): container finished" podID="df8b15ec-50a4-456f-891e-d5cd3c0572e8" containerID="1565c3cdf6ec333fb96288cb5607f4cb6d1dd8982575ef693eab06a7d361ce59" exitCode=0 Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.108058 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" event={"ID":"df8b15ec-50a4-456f-891e-d5cd3c0572e8","Type":"ContainerDied","Data":"1565c3cdf6ec333fb96288cb5607f4cb6d1dd8982575ef693eab06a7d361ce59"} Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.139953 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-b5xjt" podStartSLOduration=5.139932206 podStartE2EDuration="5.139932206s" podCreationTimestamp="2025-11-24 12:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:44:31.130187347 +0000 UTC m=+1003.487701489" watchObservedRunningTime="2025-11-24 12:44:31.139932206 +0000 UTC m=+1003.497446348" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.180008 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5lxht"] Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.195594 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.251440 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-wrtzq" podStartSLOduration=4.251415785 podStartE2EDuration="4.251415785s" podCreationTimestamp="2025-11-24 12:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:44:31.193946688 +0000 UTC m=+1003.551460830" watchObservedRunningTime="2025-11-24 12:44:31.251415785 +0000 UTC m=+1003.608929927" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.324423 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlfz5\" (UniqueName: \"kubernetes.io/projected/73042cf1-c8fa-417b-b688-cfed5a034a8b-kube-api-access-jlfz5\") pod \"73042cf1-c8fa-417b-b688-cfed5a034a8b\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.326400 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-combined-ca-bundle\") pod \"73042cf1-c8fa-417b-b688-cfed5a034a8b\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.326457 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-config-data\") pod \"73042cf1-c8fa-417b-b688-cfed5a034a8b\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.326557 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-db-sync-config-data\") pod 
\"73042cf1-c8fa-417b-b688-cfed5a034a8b\" (UID: \"73042cf1-c8fa-417b-b688-cfed5a034a8b\") " Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.330715 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73042cf1-c8fa-417b-b688-cfed5a034a8b-kube-api-access-jlfz5" (OuterVolumeSpecName: "kube-api-access-jlfz5") pod "73042cf1-c8fa-417b-b688-cfed5a034a8b" (UID: "73042cf1-c8fa-417b-b688-cfed5a034a8b"). InnerVolumeSpecName "kube-api-access-jlfz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.347657 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "73042cf1-c8fa-417b-b688-cfed5a034a8b" (UID: "73042cf1-c8fa-417b-b688-cfed5a034a8b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.389233 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.389945 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73042cf1-c8fa-417b-b688-cfed5a034a8b" (UID: "73042cf1-c8fa-417b-b688-cfed5a034a8b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:31 crc kubenswrapper[4756]: W1124 12:44:31.392150 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36cfea9c_8ce7_4c90_a2cd_fa3c7f6dab7f.slice/crio-0a73be05eed8ae401fed120bacd5f7f3c1d1ef49c8e20159dff10e6987ed4606 WatchSource:0}: Error finding container 0a73be05eed8ae401fed120bacd5f7f3c1d1ef49c8e20159dff10e6987ed4606: Status 404 returned error can't find the container with id 0a73be05eed8ae401fed120bacd5f7f3c1d1ef49c8e20159dff10e6987ed4606 Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.424032 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b954b9689-cp7gf"] Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.432044 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlfz5\" (UniqueName: \"kubernetes.io/projected/73042cf1-c8fa-417b-b688-cfed5a034a8b-kube-api-access-jlfz5\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.432092 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.432124 4756 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.436354 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-config-data" (OuterVolumeSpecName: "config-data") pod "73042cf1-c8fa-417b-b688-cfed5a034a8b" (UID: "73042cf1-c8fa-417b-b688-cfed5a034a8b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.538024 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73042cf1-c8fa-417b-b688-cfed5a034a8b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.815996 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.846850 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-dns-svc\") pod \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.847442 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-config\") pod \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.847473 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq7cp\" (UniqueName: \"kubernetes.io/projected/df8b15ec-50a4-456f-891e-d5cd3c0572e8-kube-api-access-rq7cp\") pod \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.847535 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-dns-swift-storage-0\") pod \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.847587 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-ovsdbserver-sb\") pod \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.847621 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-ovsdbserver-nb\") pod \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\" (UID: \"df8b15ec-50a4-456f-891e-d5cd3c0572e8\") " Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.859998 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8b15ec-50a4-456f-891e-d5cd3c0572e8-kube-api-access-rq7cp" (OuterVolumeSpecName: "kube-api-access-rq7cp") pod "df8b15ec-50a4-456f-891e-d5cd3c0572e8" (UID: "df8b15ec-50a4-456f-891e-d5cd3c0572e8"). InnerVolumeSpecName "kube-api-access-rq7cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.893835 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-config" (OuterVolumeSpecName: "config") pod "df8b15ec-50a4-456f-891e-d5cd3c0572e8" (UID: "df8b15ec-50a4-456f-891e-d5cd3c0572e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.908984 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "df8b15ec-50a4-456f-891e-d5cd3c0572e8" (UID: "df8b15ec-50a4-456f-891e-d5cd3c0572e8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.918085 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df8b15ec-50a4-456f-891e-d5cd3c0572e8" (UID: "df8b15ec-50a4-456f-891e-d5cd3c0572e8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.931612 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df8b15ec-50a4-456f-891e-d5cd3c0572e8" (UID: "df8b15ec-50a4-456f-891e-d5cd3c0572e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.934886 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df8b15ec-50a4-456f-891e-d5cd3c0572e8" (UID: "df8b15ec-50a4-456f-891e-d5cd3c0572e8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.956193 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.956502 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.956569 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq7cp\" (UniqueName: \"kubernetes.io/projected/df8b15ec-50a4-456f-891e-d5cd3c0572e8-kube-api-access-rq7cp\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.956628 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.956682 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:31 crc kubenswrapper[4756]: I1124 12:44:31.956776 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df8b15ec-50a4-456f-891e-d5cd3c0572e8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.132465 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-fm4f8" event={"ID":"73042cf1-c8fa-417b-b688-cfed5a034a8b","Type":"ContainerDied","Data":"96d7a898db235d4dfa5427b5c60495e2d6ca5c10300438a2c8b56e436d8c36ca"} Nov 24 12:44:32 crc 
kubenswrapper[4756]: I1124 12:44:32.132508 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96d7a898db235d4dfa5427b5c60495e2d6ca5c10300438a2c8b56e436d8c36ca" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.132575 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-fm4f8" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.165261 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" event={"ID":"df8b15ec-50a4-456f-891e-d5cd3c0572e8","Type":"ContainerDied","Data":"27413e636e99cf1d9c9e6decb984a18f29b1411ddb10b0b4fd14e1d110bf380f"} Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.165307 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-9dzhn" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.165321 4756 scope.go:117] "RemoveContainer" containerID="1565c3cdf6ec333fb96288cb5607f4cb6d1dd8982575ef693eab06a7d361ce59" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.178497 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b954b9689-cp7gf" event={"ID":"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5","Type":"ContainerStarted","Data":"1aed797411e1da4d22f44e37c2715d90f2201a02ad8e5ef36d30456498583a4e"} Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.185980 4756 generic.go:334] "Generic (PLEG): container finished" podID="27a19519-b508-46b4-b8e7-87cf03d7c6bd" containerID="b93ca14c435470dfd1d7f45a8878d0cb03e05b40c8342541a344e8fe7e849667" exitCode=0 Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.186084 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" event={"ID":"27a19519-b508-46b4-b8e7-87cf03d7c6bd","Type":"ContainerDied","Data":"b93ca14c435470dfd1d7f45a8878d0cb03e05b40c8342541a344e8fe7e849667"} Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 
12:44:32.186134 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" event={"ID":"27a19519-b508-46b4-b8e7-87cf03d7c6bd","Type":"ContainerStarted","Data":"f73cf45b668477439a39f02bab948fdc7f97f01c6545994712e880d845e2a11d"} Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.214616 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f","Type":"ContainerStarted","Data":"0a73be05eed8ae401fed120bacd5f7f3c1d1ef49c8e20159dff10e6987ed4606"} Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.421416 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:44:32 crc kubenswrapper[4756]: W1124 12:44:32.497141 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod719963c5_57e8_4ada_8d76_bf0b7f8c9ebc.slice/crio-c95beb9afb2672dec99bc4ddcd305f4b2ad8bb426aa7965f8848d14efdd3f1ed WatchSource:0}: Error finding container c95beb9afb2672dec99bc4ddcd305f4b2ad8bb426aa7965f8848d14efdd3f1ed: Status 404 returned error can't find the container with id c95beb9afb2672dec99bc4ddcd305f4b2ad8bb426aa7965f8848d14efdd3f1ed Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.538851 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Nov 24 12:44:32 crc kubenswrapper[4756]: E1124 12:44:32.539370 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73042cf1-c8fa-417b-b688-cfed5a034a8b" containerName="watcher-db-sync" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.539433 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="73042cf1-c8fa-417b-b688-cfed5a034a8b" containerName="watcher-db-sync" Nov 24 12:44:32 crc kubenswrapper[4756]: E1124 12:44:32.539486 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="df8b15ec-50a4-456f-891e-d5cd3c0572e8" containerName="init" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.539549 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8b15ec-50a4-456f-891e-d5cd3c0572e8" containerName="init" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.539787 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="df8b15ec-50a4-456f-891e-d5cd3c0572e8" containerName="init" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.539852 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="73042cf1-c8fa-417b-b688-cfed5a034a8b" containerName="watcher-db-sync" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.540574 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.547684 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.548102 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-sdcnv" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.579630 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.581312 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.608784 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.612384 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tklr\" (UniqueName: \"kubernetes.io/projected/88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d-kube-api-access-5tklr\") pod \"watcher-applier-0\" (UID: \"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d\") " pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.612530 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d\") " pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.612609 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.612745 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.612795 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d-config-data\") pod \"watcher-applier-0\" (UID: \"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d\") " pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.613457 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d-logs\") pod \"watcher-applier-0\" (UID: \"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d\") " pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.613616 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9988f7-fa01-4411-986d-ac6ba024a7a5-logs\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.613711 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjss2\" (UniqueName: \"kubernetes.io/projected/9e9988f7-fa01-4411-986d-ac6ba024a7a5-kube-api-access-bjss2\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.613739 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.625283 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 
12:44:32.667630 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-9dzhn"] Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.716788 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d\") " pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.716915 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.717040 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.717087 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d-config-data\") pod \"watcher-applier-0\" (UID: \"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d\") " pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.717212 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d-logs\") pod \"watcher-applier-0\" (UID: \"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d\") " pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: 
I1124 12:44:32.717308 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9988f7-fa01-4411-986d-ac6ba024a7a5-logs\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.717414 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjss2\" (UniqueName: \"kubernetes.io/projected/9e9988f7-fa01-4411-986d-ac6ba024a7a5-kube-api-access-bjss2\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.717477 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.717565 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tklr\" (UniqueName: \"kubernetes.io/projected/88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d-kube-api-access-5tklr\") pod \"watcher-applier-0\" (UID: \"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d\") " pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.723119 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d-logs\") pod \"watcher-applier-0\" (UID: \"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d\") " pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.729780 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9e9988f7-fa01-4411-986d-ac6ba024a7a5-logs\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.735227 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d\") " pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.740810 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.747188 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tklr\" (UniqueName: \"kubernetes.io/projected/88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d-kube-api-access-5tklr\") pod \"watcher-applier-0\" (UID: \"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d\") " pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.748070 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d-config-data\") pod \"watcher-applier-0\" (UID: \"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d\") " pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.751244 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " 
pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.752946 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.754933 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-9dzhn"] Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.761200 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjss2\" (UniqueName: \"kubernetes.io/projected/9e9988f7-fa01-4411-986d-ac6ba024a7a5-kube-api-access-bjss2\") pod \"watcher-decision-engine-0\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.808277 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.897694 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.899522 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.905354 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.917041 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.927672 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c3bc094-a826-40f4-ba40-2525d31a13e1-logs\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.927748 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-config-data\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.927771 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.927802 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9vwf\" (UniqueName: \"kubernetes.io/projected/2c3bc094-a826-40f4-ba40-2525d31a13e1-kube-api-access-s9vwf\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.927887 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.932657 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Nov 24 12:44:32 crc kubenswrapper[4756]: I1124 12:44:32.946578 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.037257 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.037379 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c3bc094-a826-40f4-ba40-2525d31a13e1-logs\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.037502 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-config-data\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.037571 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-combined-ca-bundle\") pod \"watcher-api-0\" (UID: 
\"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.037645 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9vwf\" (UniqueName: \"kubernetes.io/projected/2c3bc094-a826-40f4-ba40-2525d31a13e1-kube-api-access-s9vwf\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.043769 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c3bc094-a826-40f4-ba40-2525d31a13e1-logs\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.093629 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.106096 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-config-data\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.106594 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.107791 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9vwf\" 
(UniqueName: \"kubernetes.io/projected/2c3bc094-a826-40f4-ba40-2525d31a13e1-kube-api-access-s9vwf\") pod \"watcher-api-0\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " pod="openstack/watcher-api-0" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.274307 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.320394 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" event={"ID":"27a19519-b508-46b4-b8e7-87cf03d7c6bd","Type":"ContainerStarted","Data":"db7f73c5e8ac9272839f7353c43a2512123a455adc864d82f1d8e18c0b8ca000"} Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.321627 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.348864 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc","Type":"ContainerStarted","Data":"c95beb9afb2672dec99bc4ddcd305f4b2ad8bb426aa7965f8848d14efdd3f1ed"} Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.484591 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.484899 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.484957 4756 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.487919 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07694ef974a30730903412aa5f6b85b0f7a0adf88d6a936a30f315e540f03ca9"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.487986 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://07694ef974a30730903412aa5f6b85b0f7a0adf88d6a936a30f315e540f03ca9" gracePeriod=600 Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.640592 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" podStartSLOduration=6.640558966 podStartE2EDuration="6.640558966s" podCreationTimestamp="2025-11-24 12:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:44:33.392124726 +0000 UTC m=+1005.749638868" watchObservedRunningTime="2025-11-24 12:44:33.640558966 +0000 UTC m=+1005.998073118" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.690189 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 12:44:33 crc kubenswrapper[4756]: E1124 12:44:33.936775 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0f50ecd_811f_4df2_ae0c_83a787d6cbec.slice/crio-07694ef974a30730903412aa5f6b85b0f7a0adf88d6a936a30f315e540f03ca9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0f50ecd_811f_4df2_ae0c_83a787d6cbec.slice/crio-conmon-07694ef974a30730903412aa5f6b85b0f7a0adf88d6a936a30f315e540f03ca9.scope\": RecentStats: unable to find data in memory cache]" Nov 24 12:44:33 crc kubenswrapper[4756]: I1124 12:44:33.961044 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Nov 24 12:44:33 crc kubenswrapper[4756]: W1124 12:44:33.989773 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88bd3f9d_e4a1_4cd0_afe2_e03876ec5c2d.slice/crio-0812df18072bececbb93214eff02fed115d5fae24e6abf6319a83f0d515cfe63 WatchSource:0}: Error finding container 0812df18072bececbb93214eff02fed115d5fae24e6abf6319a83f0d515cfe63: Status 404 returned error can't find the container with id 0812df18072bececbb93214eff02fed115d5fae24e6abf6319a83f0d515cfe63 Nov 24 12:44:34 crc kubenswrapper[4756]: I1124 12:44:34.124526 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Nov 24 12:44:34 crc kubenswrapper[4756]: I1124 12:44:34.485410 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="07694ef974a30730903412aa5f6b85b0f7a0adf88d6a936a30f315e540f03ca9" exitCode=0 Nov 24 12:44:34 crc kubenswrapper[4756]: I1124 12:44:34.534933 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df8b15ec-50a4-456f-891e-d5cd3c0572e8" path="/var/lib/kubelet/pods/df8b15ec-50a4-456f-891e-d5cd3c0572e8/volumes" Nov 24 12:44:34 crc kubenswrapper[4756]: I1124 12:44:34.536739 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"07694ef974a30730903412aa5f6b85b0f7a0adf88d6a936a30f315e540f03ca9"} Nov 24 12:44:34 crc kubenswrapper[4756]: I1124 12:44:34.536917 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"2c3bc094-a826-40f4-ba40-2525d31a13e1","Type":"ContainerStarted","Data":"9e8125b8df0063fd22f6af1a3ad89d366cb186d9c17a3c92e5f8890e2e120e3d"} Nov 24 12:44:34 crc kubenswrapper[4756]: I1124 12:44:34.537009 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc","Type":"ContainerStarted","Data":"6b4afb0d8867c53a07e623fc9578d315402290ad9bf770ed451d84bae2b5fba8"} Nov 24 12:44:34 crc kubenswrapper[4756]: I1124 12:44:34.537116 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9e9988f7-fa01-4411-986d-ac6ba024a7a5","Type":"ContainerStarted","Data":"d615550f2cfef9a71ae95420f9f8de433cf1d807f61105c64342305700f02ed3"} Nov 24 12:44:34 crc kubenswrapper[4756]: I1124 12:44:34.536975 4756 scope.go:117] "RemoveContainer" containerID="0e7ca259f45f1dc780d3934be64aebd04b7e861b4656bbd4bf229c3c5aaf5bbb" Nov 24 12:44:34 crc kubenswrapper[4756]: I1124 12:44:34.541642 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f","Type":"ContainerStarted","Data":"438f37723b29e8d1d50f2336abacbd2db4575e9ec3df901853b6eca5210ea2ad"} Nov 24 12:44:34 crc kubenswrapper[4756]: I1124 12:44:34.549980 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d","Type":"ContainerStarted","Data":"0812df18072bececbb93214eff02fed115d5fae24e6abf6319a83f0d515cfe63"} Nov 24 12:44:35 crc kubenswrapper[4756]: I1124 
12:44:35.561061 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"aab2c62b178595e23ab652b4142321a0148fc0017610e7cc4f9bf61e40ae4629"} Nov 24 12:44:35 crc kubenswrapper[4756]: I1124 12:44:35.584318 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"2c3bc094-a826-40f4-ba40-2525d31a13e1","Type":"ContainerStarted","Data":"7fdd4baadb6f98e25135fb7b3bfe8f63130b57de9beefc19fbfe6320cee2527b"} Nov 24 12:44:35 crc kubenswrapper[4756]: I1124 12:44:35.584367 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"2c3bc094-a826-40f4-ba40-2525d31a13e1","Type":"ContainerStarted","Data":"9e693877be013cc3325149bb4442619bfda064dcda426f4ea1dd3c73e823a82c"} Nov 24 12:44:35 crc kubenswrapper[4756]: I1124 12:44:35.595094 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f","Type":"ContainerStarted","Data":"ad2c8404b45ba392be9a1f43000ceb015285ca88a370649e8d717397ddfcabf2"} Nov 24 12:44:35 crc kubenswrapper[4756]: I1124 12:44:35.595125 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" containerName="glance-log" containerID="cri-o://438f37723b29e8d1d50f2336abacbd2db4575e9ec3df901853b6eca5210ea2ad" gracePeriod=30 Nov 24 12:44:35 crc kubenswrapper[4756]: I1124 12:44:35.595194 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" containerName="glance-httpd" containerID="cri-o://ad2c8404b45ba392be9a1f43000ceb015285ca88a370649e8d717397ddfcabf2" gracePeriod=30 Nov 24 12:44:35 crc kubenswrapper[4756]: I1124 12:44:35.595270 4756 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Nov 24 12:44:35 crc kubenswrapper[4756]: I1124 12:44:35.626180 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.626150594 podStartE2EDuration="3.626150594s" podCreationTimestamp="2025-11-24 12:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:44:35.620612372 +0000 UTC m=+1007.978126504" watchObservedRunningTime="2025-11-24 12:44:35.626150594 +0000 UTC m=+1007.983664726" Nov 24 12:44:35 crc kubenswrapper[4756]: I1124 12:44:35.655179 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.655142535 podStartE2EDuration="8.655142535s" podCreationTimestamp="2025-11-24 12:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:44:35.638829155 +0000 UTC m=+1007.996343307" watchObservedRunningTime="2025-11-24 12:44:35.655142535 +0000 UTC m=+1008.012656677" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.430764 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84746cc9cc-f9r4d"] Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.612788 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b958b5cb8-lff28"] Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.630814 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.641762 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.654861 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b958b5cb8-lff28"] Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.662383 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac680988-de91-4b39-ac09-3938cd5a2f91-logs\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.662555 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-horizon-tls-certs\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.662594 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-horizon-secret-key\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.662711 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac680988-de91-4b39-ac09-3938cd5a2f91-config-data\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc 
kubenswrapper[4756]: I1124 12:44:36.662742 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac680988-de91-4b39-ac09-3938cd5a2f91-scripts\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.662894 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-combined-ca-bundle\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.662921 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcmgv\" (UniqueName: \"kubernetes.io/projected/ac680988-de91-4b39-ac09-3938cd5a2f91-kube-api-access-pcmgv\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.663373 4756 generic.go:334] "Generic (PLEG): container finished" podID="36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" containerID="ad2c8404b45ba392be9a1f43000ceb015285ca88a370649e8d717397ddfcabf2" exitCode=0 Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.663398 4756 generic.go:334] "Generic (PLEG): container finished" podID="36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" containerID="438f37723b29e8d1d50f2336abacbd2db4575e9ec3df901853b6eca5210ea2ad" exitCode=143 Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.663475 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f","Type":"ContainerDied","Data":"ad2c8404b45ba392be9a1f43000ceb015285ca88a370649e8d717397ddfcabf2"} Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.663510 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f","Type":"ContainerDied","Data":"438f37723b29e8d1d50f2336abacbd2db4575e9ec3df901853b6eca5210ea2ad"} Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.681091 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b954b9689-cp7gf"] Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.695456 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-585c6478b8-gsbzg"] Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.700213 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.709375 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc","Type":"ContainerStarted","Data":"e8f2095c983fb2dfd2768a1215350a662c3fb0e054aa798b8690755b7d7df3cb"} Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.709672 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" containerName="glance-log" containerID="cri-o://6b4afb0d8867c53a07e623fc9578d315402290ad9bf770ed451d84bae2b5fba8" gracePeriod=30 Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.709871 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" containerName="glance-httpd" containerID="cri-o://e8f2095c983fb2dfd2768a1215350a662c3fb0e054aa798b8690755b7d7df3cb" gracePeriod=30 Nov 
24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.730908 4756 generic.go:334] "Generic (PLEG): container finished" podID="781c55f5-0031-40b5-9a12-d5bb53305150" containerID="950682b01db04530c0ac8af34323afc47edae40fe59c7b36a210b702de6b4ccf" exitCode=0 Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.731831 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5xjt" event={"ID":"781c55f5-0031-40b5-9a12-d5bb53305150","Type":"ContainerDied","Data":"950682b01db04530c0ac8af34323afc47edae40fe59c7b36a210b702de6b4ccf"} Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.739445 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-585c6478b8-gsbzg"] Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.764932 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac680988-de91-4b39-ac09-3938cd5a2f91-config-data\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.764984 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac680988-de91-4b39-ac09-3938cd5a2f91-scripts\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.765019 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae02ece-f457-4943-92fe-9569b5083f41-horizon-tls-certs\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.765081 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvzcz\" (UniqueName: \"kubernetes.io/projected/6ae02ece-f457-4943-92fe-9569b5083f41-kube-api-access-pvzcz\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.765104 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ae02ece-f457-4943-92fe-9569b5083f41-config-data\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.765132 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-combined-ca-bundle\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.765151 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcmgv\" (UniqueName: \"kubernetes.io/projected/ac680988-de91-4b39-ac09-3938cd5a2f91-kube-api-access-pcmgv\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.765187 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ae02ece-f457-4943-92fe-9569b5083f41-horizon-secret-key\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.765211 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ae02ece-f457-4943-92fe-9569b5083f41-scripts\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.765277 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae02ece-f457-4943-92fe-9569b5083f41-combined-ca-bundle\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.765304 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac680988-de91-4b39-ac09-3938cd5a2f91-logs\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.765377 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-horizon-tls-certs\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.765395 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-horizon-secret-key\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.765430 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/6ae02ece-f457-4943-92fe-9569b5083f41-logs\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.766921 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac680988-de91-4b39-ac09-3938cd5a2f91-config-data\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.767060 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac680988-de91-4b39-ac09-3938cd5a2f91-scripts\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.768605 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac680988-de91-4b39-ac09-3938cd5a2f91-logs\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.776327 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-horizon-tls-certs\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.784871 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-combined-ca-bundle\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " 
pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.794953 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.794927227 podStartE2EDuration="9.794927227s" podCreationTimestamp="2025-11-24 12:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:44:36.753914525 +0000 UTC m=+1009.111428667" watchObservedRunningTime="2025-11-24 12:44:36.794927227 +0000 UTC m=+1009.152441369" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.802017 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-horizon-secret-key\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.806151 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcmgv\" (UniqueName: \"kubernetes.io/projected/ac680988-de91-4b39-ac09-3938cd5a2f91-kube-api-access-pcmgv\") pod \"horizon-7b958b5cb8-lff28\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.866804 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae02ece-f457-4943-92fe-9569b5083f41-horizon-tls-certs\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.866879 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvzcz\" (UniqueName: 
\"kubernetes.io/projected/6ae02ece-f457-4943-92fe-9569b5083f41-kube-api-access-pvzcz\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.866922 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ae02ece-f457-4943-92fe-9569b5083f41-config-data\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.866944 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ae02ece-f457-4943-92fe-9569b5083f41-horizon-secret-key\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.867009 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ae02ece-f457-4943-92fe-9569b5083f41-scripts\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.867047 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae02ece-f457-4943-92fe-9569b5083f41-combined-ca-bundle\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.867123 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ae02ece-f457-4943-92fe-9569b5083f41-logs\") pod \"horizon-585c6478b8-gsbzg\" 
(UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.867633 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ae02ece-f457-4943-92fe-9569b5083f41-logs\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.868404 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ae02ece-f457-4943-92fe-9569b5083f41-scripts\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.868894 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ae02ece-f457-4943-92fe-9569b5083f41-config-data\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.872872 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ae02ece-f457-4943-92fe-9569b5083f41-horizon-secret-key\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.880019 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae02ece-f457-4943-92fe-9569b5083f41-horizon-tls-certs\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.880722 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae02ece-f457-4943-92fe-9569b5083f41-combined-ca-bundle\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.893874 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvzcz\" (UniqueName: \"kubernetes.io/projected/6ae02ece-f457-4943-92fe-9569b5083f41-kube-api-access-pvzcz\") pod \"horizon-585c6478b8-gsbzg\" (UID: \"6ae02ece-f457-4943-92fe-9569b5083f41\") " pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:36 crc kubenswrapper[4756]: I1124 12:44:36.967556 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:44:37 crc kubenswrapper[4756]: I1124 12:44:37.064104 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:44:37 crc kubenswrapper[4756]: I1124 12:44:37.752108 4756 generic.go:334] "Generic (PLEG): container finished" podID="719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" containerID="e8f2095c983fb2dfd2768a1215350a662c3fb0e054aa798b8690755b7d7df3cb" exitCode=0 Nov 24 12:44:37 crc kubenswrapper[4756]: I1124 12:44:37.752168 4756 generic.go:334] "Generic (PLEG): container finished" podID="719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" containerID="6b4afb0d8867c53a07e623fc9578d315402290ad9bf770ed451d84bae2b5fba8" exitCode=143 Nov 24 12:44:37 crc kubenswrapper[4756]: I1124 12:44:37.752399 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc","Type":"ContainerDied","Data":"e8f2095c983fb2dfd2768a1215350a662c3fb0e054aa798b8690755b7d7df3cb"} Nov 24 12:44:37 crc kubenswrapper[4756]: I1124 12:44:37.752442 4756 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-internal-api-0" event={"ID":"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc","Type":"ContainerDied","Data":"6b4afb0d8867c53a07e623fc9578d315402290ad9bf770ed451d84bae2b5fba8"} Nov 24 12:44:37 crc kubenswrapper[4756]: I1124 12:44:37.752499 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:44:38 crc kubenswrapper[4756]: I1124 12:44:38.276694 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Nov 24 12:44:38 crc kubenswrapper[4756]: I1124 12:44:38.451378 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:44:38 crc kubenswrapper[4756]: I1124 12:44:38.567744 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5vk97"] Nov 24 12:44:38 crc kubenswrapper[4756]: I1124 12:44:38.568109 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" podUID="af71255b-f52f-494b-8f54-fb4f4526a742" containerName="dnsmasq-dns" containerID="cri-o://162050712a6ce5a9edb1731cdcaaa1ec29485d901d92b1a2022f15e4259de83d" gracePeriod=10 Nov 24 12:44:38 crc kubenswrapper[4756]: I1124 12:44:38.865150 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:44:39 crc kubenswrapper[4756]: I1124 12:44:39.500239 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Nov 24 12:44:39 crc kubenswrapper[4756]: I1124 12:44:39.882854 4756 generic.go:334] "Generic (PLEG): container finished" podID="af71255b-f52f-494b-8f54-fb4f4526a742" containerID="162050712a6ce5a9edb1731cdcaaa1ec29485d901d92b1a2022f15e4259de83d" exitCode=0 Nov 24 12:44:39 crc kubenswrapper[4756]: I1124 12:44:39.882929 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" 
event={"ID":"af71255b-f52f-494b-8f54-fb4f4526a742","Type":"ContainerDied","Data":"162050712a6ce5a9edb1731cdcaaa1ec29485d901d92b1a2022f15e4259de83d"} Nov 24 12:44:42 crc kubenswrapper[4756]: I1124 12:44:42.173090 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" podUID="af71255b-f52f-494b-8f54-fb4f4526a742" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: connect: connection refused" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.276312 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.284382 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.797984 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.863762 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-combined-ca-bundle\") pod \"781c55f5-0031-40b5-9a12-d5bb53305150\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.863882 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqfwj\" (UniqueName: \"kubernetes.io/projected/781c55f5-0031-40b5-9a12-d5bb53305150-kube-api-access-fqfwj\") pod \"781c55f5-0031-40b5-9a12-d5bb53305150\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.863959 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-config-data\") pod 
\"781c55f5-0031-40b5-9a12-d5bb53305150\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.864086 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-scripts\") pod \"781c55f5-0031-40b5-9a12-d5bb53305150\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.864188 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-fernet-keys\") pod \"781c55f5-0031-40b5-9a12-d5bb53305150\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.864249 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-credential-keys\") pod \"781c55f5-0031-40b5-9a12-d5bb53305150\" (UID: \"781c55f5-0031-40b5-9a12-d5bb53305150\") " Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.880863 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-scripts" (OuterVolumeSpecName: "scripts") pod "781c55f5-0031-40b5-9a12-d5bb53305150" (UID: "781c55f5-0031-40b5-9a12-d5bb53305150"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.888252 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "781c55f5-0031-40b5-9a12-d5bb53305150" (UID: "781c55f5-0031-40b5-9a12-d5bb53305150"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.890626 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "781c55f5-0031-40b5-9a12-d5bb53305150" (UID: "781c55f5-0031-40b5-9a12-d5bb53305150"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.905798 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781c55f5-0031-40b5-9a12-d5bb53305150-kube-api-access-fqfwj" (OuterVolumeSpecName: "kube-api-access-fqfwj") pod "781c55f5-0031-40b5-9a12-d5bb53305150" (UID: "781c55f5-0031-40b5-9a12-d5bb53305150"). InnerVolumeSpecName "kube-api-access-fqfwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.909563 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "781c55f5-0031-40b5-9a12-d5bb53305150" (UID: "781c55f5-0031-40b5-9a12-d5bb53305150"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.938391 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-config-data" (OuterVolumeSpecName: "config-data") pod "781c55f5-0031-40b5-9a12-d5bb53305150" (UID: "781c55f5-0031-40b5-9a12-d5bb53305150"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.948213 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b5xjt" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.948225 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5xjt" event={"ID":"781c55f5-0031-40b5-9a12-d5bb53305150","Type":"ContainerDied","Data":"1e2fc4d66e76c65a6a8a1b3a67f4902dcf131710f6c842efa165b8f17023a2d6"} Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.948564 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e2fc4d66e76c65a6a8a1b3a67f4902dcf131710f6c842efa165b8f17023a2d6" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.963005 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.966275 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.966735 4756 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.966745 4756 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.966758 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.966772 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqfwj\" (UniqueName: 
\"kubernetes.io/projected/781c55f5-0031-40b5-9a12-d5bb53305150-kube-api-access-fqfwj\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:43 crc kubenswrapper[4756]: I1124 12:44:43.966782 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781c55f5-0031-40b5-9a12-d5bb53305150-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:44 crc kubenswrapper[4756]: E1124 12:44:44.282703 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod781c55f5_0031_40b5_9a12_d5bb53305150.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod781c55f5_0031_40b5_9a12_d5bb53305150.slice/crio-1e2fc4d66e76c65a6a8a1b3a67f4902dcf131710f6c842efa165b8f17023a2d6\": RecentStats: unable to find data in memory cache]" Nov 24 12:44:44 crc kubenswrapper[4756]: I1124 12:44:44.889734 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-b5xjt"] Nov 24 12:44:44 crc kubenswrapper[4756]: I1124 12:44:44.900602 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-b5xjt"] Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.018837 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4kphn"] Nov 24 12:44:45 crc kubenswrapper[4756]: E1124 12:44:45.020264 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781c55f5-0031-40b5-9a12-d5bb53305150" containerName="keystone-bootstrap" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.020295 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="781c55f5-0031-40b5-9a12-d5bb53305150" containerName="keystone-bootstrap" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.020847 4756 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="781c55f5-0031-40b5-9a12-d5bb53305150" containerName="keystone-bootstrap" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.022436 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.025849 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2m2wq" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.028110 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.028443 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.028670 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.030284 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.062403 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4kphn"] Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.105043 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4tnh\" (UniqueName: \"kubernetes.io/projected/3efa48e0-4cac-4152-8920-74324d606778-kube-api-access-m4tnh\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.105095 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-fernet-keys\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " 
pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.105185 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-config-data\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.105232 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-scripts\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.105249 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-combined-ca-bundle\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.105294 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-credential-keys\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.207382 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-scripts\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 
crc kubenswrapper[4756]: I1124 12:44:45.207426 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-combined-ca-bundle\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.207494 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-credential-keys\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.207551 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4tnh\" (UniqueName: \"kubernetes.io/projected/3efa48e0-4cac-4152-8920-74324d606778-kube-api-access-m4tnh\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.207578 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-fernet-keys\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.207635 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-config-data\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.214417 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-scripts\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.215914 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-credential-keys\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.223105 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-config-data\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.223306 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-fernet-keys\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.224308 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-combined-ca-bundle\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.236496 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4tnh\" (UniqueName: 
\"kubernetes.io/projected/3efa48e0-4cac-4152-8920-74324d606778-kube-api-access-m4tnh\") pod \"keystone-bootstrap-4kphn\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:45 crc kubenswrapper[4756]: I1124 12:44:45.369393 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:44:46 crc kubenswrapper[4756]: I1124 12:44:46.487432 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781c55f5-0031-40b5-9a12-d5bb53305150" path="/var/lib/kubelet/pods/781c55f5-0031-40b5-9a12-d5bb53305150/volumes" Nov 24 12:44:47 crc kubenswrapper[4756]: I1124 12:44:47.644818 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Nov 24 12:44:47 crc kubenswrapper[4756]: I1124 12:44:47.645575 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api-log" containerID="cri-o://9e693877be013cc3325149bb4442619bfda064dcda426f4ea1dd3c73e823a82c" gracePeriod=30 Nov 24 12:44:47 crc kubenswrapper[4756]: I1124 12:44:47.646112 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api" containerID="cri-o://7fdd4baadb6f98e25135fb7b3bfe8f63130b57de9beefc19fbfe6320cee2527b" gracePeriod=30 Nov 24 12:44:48 crc kubenswrapper[4756]: I1124 12:44:48.008582 4756 generic.go:334] "Generic (PLEG): container finished" podID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerID="9e693877be013cc3325149bb4442619bfda064dcda426f4ea1dd3c73e823a82c" exitCode=143 Nov 24 12:44:48 crc kubenswrapper[4756]: I1124 12:44:48.008636 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"2c3bc094-a826-40f4-ba40-2525d31a13e1","Type":"ContainerDied","Data":"9e693877be013cc3325149bb4442619bfda064dcda426f4ea1dd3c73e823a82c"} Nov 24 12:44:50 crc kubenswrapper[4756]: I1124 12:44:50.778875 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": read tcp 10.217.0.2:36834->10.217.0.161:9322: read: connection reset by peer" Nov 24 12:44:50 crc kubenswrapper[4756]: I1124 12:44:50.778926 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": read tcp 10.217.0.2:36826->10.217.0.161:9322: read: connection reset by peer" Nov 24 12:44:52 crc kubenswrapper[4756]: I1124 12:44:52.060968 4756 generic.go:334] "Generic (PLEG): container finished" podID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerID="7fdd4baadb6f98e25135fb7b3bfe8f63130b57de9beefc19fbfe6320cee2527b" exitCode=0 Nov 24 12:44:52 crc kubenswrapper[4756]: I1124 12:44:52.061015 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"2c3bc094-a826-40f4-ba40-2525d31a13e1","Type":"ContainerDied","Data":"7fdd4baadb6f98e25135fb7b3bfe8f63130b57de9beefc19fbfe6320cee2527b"} Nov 24 12:44:52 crc kubenswrapper[4756]: I1124 12:44:52.232291 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" podUID="af71255b-f52f-494b-8f54-fb4f4526a742" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: i/o timeout" Nov 24 12:44:53 crc kubenswrapper[4756]: I1124 12:44:53.276610 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api" probeResult="failure" 
output="Get \"http://10.217.0.161:9322/\": dial tcp 10.217.0.161:9322: connect: connection refused" Nov 24 12:44:53 crc kubenswrapper[4756]: I1124 12:44:53.276597 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": dial tcp 10.217.0.161:9322: connect: connection refused" Nov 24 12:44:53 crc kubenswrapper[4756]: E1124 12:44:53.638560 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 12:44:53 crc kubenswrapper[4756]: E1124 12:44:53.638746 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n576h5ffh5bdh7fh56dh64h54h646h59dh5f7h95h5f6h666h558h67fhb8h56ch5bbhb6h54fh67fh84h684h678hc9h5c8h59bh659h54fhc7h569h66dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb5cz,ReadOnly:true,MountPath:
/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-84746cc9cc-f9r4d_openstack(0286b63b-3a9a-4623-ae9c-7032413a5154): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:44:53 crc kubenswrapper[4756]: E1124 12:44:53.642243 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-84746cc9cc-f9r4d" podUID="0286b63b-3a9a-4623-ae9c-7032413a5154" Nov 24 12:44:53 crc kubenswrapper[4756]: E1124 12:44:53.669868 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 12:44:53 crc kubenswrapper[4756]: E1124 12:44:53.670835 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail 
-n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f6h8dh7dhc6h667h7chf4h75hd4h697h698h697h5d4hc5h89h67dhb5h5f9h664h77h565h65dh64dh578h5f4h5c9h657h84h57h67dh5b5h5bfq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvh2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6b954b9689-cp7gf_openstack(15dd5667-6a2e-48d3-8fd8-bccfb555a7c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:44:53 crc kubenswrapper[4756]: E1124 
12:44:53.673964 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6b954b9689-cp7gf" podUID="15dd5667-6a2e-48d3-8fd8-bccfb555a7c5" Nov 24 12:44:55 crc kubenswrapper[4756]: E1124 12:44:55.528914 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 12:44:55 crc kubenswrapper[4756]: E1124 12:44:55.529439 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc9h5b5h578hfbh5f8h5bh578h64fh59dh575hd4h68h588hdh56dh5b9h9ch69h5d9h5dbh5ch79h569h576h8bh7dhbdh574h548h7bh57dh65bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdzds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-797c846fdf-pcdw6_openstack(afe9dc1a-7891-46c4-a813-56ae99f0f886): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:44:55 crc kubenswrapper[4756]: E1124 
12:44:55.553362 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-797c846fdf-pcdw6" podUID="afe9dc1a-7891-46c4-a813-56ae99f0f886" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.630730 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.638468 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.751761 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-config-data\") pod \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.751831 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-httpd-run\") pod \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.751871 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lv4c\" (UniqueName: \"kubernetes.io/projected/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-kube-api-access-7lv4c\") pod \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.751905 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-public-tls-certs\") pod \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.751963 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-httpd-run\") pod \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.752017 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-scripts\") pod \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.752035 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-config-data\") pod \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.752086 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-internal-tls-certs\") pod \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.752106 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72wnn\" (UniqueName: \"kubernetes.io/projected/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-kube-api-access-72wnn\") pod \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\" (UID: 
\"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.752146 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-combined-ca-bundle\") pod \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.752206 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-scripts\") pod \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.752222 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.752249 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-combined-ca-bundle\") pod \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.752271 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.752293 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-logs\") pod 
\"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.752311 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-logs\") pod \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\" (UID: \"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.752435 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" (UID: "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.752761 4756 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.753075 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-logs" (OuterVolumeSpecName: "logs") pod "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" (UID: "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.753101 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" (UID: "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.756920 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-logs" (OuterVolumeSpecName: "logs") pod "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" (UID: "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.757308 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-kube-api-access-7lv4c" (OuterVolumeSpecName: "kube-api-access-7lv4c") pod "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" (UID: "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc"). InnerVolumeSpecName "kube-api-access-7lv4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.760641 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" (UID: "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.760696 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-scripts" (OuterVolumeSpecName: "scripts") pod "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" (UID: "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.776448 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-scripts" (OuterVolumeSpecName: "scripts") pod "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" (UID: "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.788086 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" (UID: "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.790411 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-kube-api-access-72wnn" (OuterVolumeSpecName: "kube-api-access-72wnn") pod "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" (UID: "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f"). InnerVolumeSpecName "kube-api-access-72wnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.819657 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" (UID: "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.828459 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" (UID: "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.848448 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-config-data" (OuterVolumeSpecName: "config-data") pod "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" (UID: "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.849301 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-config-data" (OuterVolumeSpecName: "config-data") pod "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" (UID: "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.853781 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" (UID: "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.854874 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-public-tls-certs\") pod \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\" (UID: \"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f\") " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.855510 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.855536 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.855546 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.855564 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lv4c\" (UniqueName: \"kubernetes.io/projected/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-kube-api-access-7lv4c\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.855574 4756 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.855583 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: 
I1124 12:44:55.855592 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.855602 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72wnn\" (UniqueName: \"kubernetes.io/projected/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-kube-api-access-72wnn\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.855611 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.855620 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.855648 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.855657 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.855672 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 24 12:44:55 crc kubenswrapper[4756]: W1124 12:44:55.856111 4756 empty_dir.go:500] Warning: Unmount skipped because path does not exist: 
/var/lib/kubelet/pods/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f/volumes/kubernetes.io~secret/public-tls-certs Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.856132 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" (UID: "36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.877693 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.883024 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.887635 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" (UID: "719963c5-57e8-4ada-8d76-bf0b7f8c9ebc"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.957769 4756 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.957795 4756 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.957806 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:55 crc kubenswrapper[4756]: I1124 12:44:55.957815 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:56 crc kubenswrapper[4756]: E1124 12:44:56.061320 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 24 12:44:56 crc kubenswrapper[4756]: E1124 12:44:56.061582 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n57ch8bh676h658h557hbfh9dh64ch5b6hb9h675h55ch66ch57dh586h87hfdh689h68h57dh87h8fhf4h545h5chb9h55chf8h659h54h594h689q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ff5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(03921298-d6d8-404c-9ee5-c5101a92892e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.066237 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.118186 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"719963c5-57e8-4ada-8d76-bf0b7f8c9ebc","Type":"ContainerDied","Data":"c95beb9afb2672dec99bc4ddcd305f4b2ad8bb426aa7965f8848d14efdd3f1ed"} Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.118237 4756 scope.go:117] "RemoveContainer" containerID="e8f2095c983fb2dfd2768a1215350a662c3fb0e054aa798b8690755b7d7df3cb" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.118355 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.123263 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" event={"ID":"af71255b-f52f-494b-8f54-fb4f4526a742","Type":"ContainerDied","Data":"d3cac5cd8661e9ca5a7e4d138d41b7c52301c2ef307256f894dffa2e71ab1c5d"} Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.123349 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.127959 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f","Type":"ContainerDied","Data":"0a73be05eed8ae401fed120bacd5f7f3c1d1ef49c8e20159dff10e6987ed4606"} Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.128108 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.160420 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-config\") pod \"af71255b-f52f-494b-8f54-fb4f4526a742\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.160501 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-ovsdbserver-nb\") pod \"af71255b-f52f-494b-8f54-fb4f4526a742\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.160598 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-ovsdbserver-sb\") pod \"af71255b-f52f-494b-8f54-fb4f4526a742\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.160617 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmr7d\" (UniqueName: \"kubernetes.io/projected/af71255b-f52f-494b-8f54-fb4f4526a742-kube-api-access-cmr7d\") pod \"af71255b-f52f-494b-8f54-fb4f4526a742\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.160635 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-dns-svc\") pod \"af71255b-f52f-494b-8f54-fb4f4526a742\" (UID: \"af71255b-f52f-494b-8f54-fb4f4526a742\") " Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.191362 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/af71255b-f52f-494b-8f54-fb4f4526a742-kube-api-access-cmr7d" (OuterVolumeSpecName: "kube-api-access-cmr7d") pod "af71255b-f52f-494b-8f54-fb4f4526a742" (UID: "af71255b-f52f-494b-8f54-fb4f4526a742"). InnerVolumeSpecName "kube-api-access-cmr7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.228822 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.255147 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.269829 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.274448 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmr7d\" (UniqueName: \"kubernetes.io/projected/af71255b-f52f-494b-8f54-fb4f4526a742-kube-api-access-cmr7d\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.291228 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.300712 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:44:56 crc kubenswrapper[4756]: E1124 12:44:56.301206 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af71255b-f52f-494b-8f54-fb4f4526a742" containerName="dnsmasq-dns" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.301224 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="af71255b-f52f-494b-8f54-fb4f4526a742" containerName="dnsmasq-dns" Nov 24 12:44:56 crc kubenswrapper[4756]: E1124 12:44:56.301253 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" 
containerName="glance-log" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.301260 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" containerName="glance-log" Nov 24 12:44:56 crc kubenswrapper[4756]: E1124 12:44:56.301272 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af71255b-f52f-494b-8f54-fb4f4526a742" containerName="init" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.301277 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="af71255b-f52f-494b-8f54-fb4f4526a742" containerName="init" Nov 24 12:44:56 crc kubenswrapper[4756]: E1124 12:44:56.301289 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" containerName="glance-httpd" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.301295 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" containerName="glance-httpd" Nov 24 12:44:56 crc kubenswrapper[4756]: E1124 12:44:56.301308 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" containerName="glance-log" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.301314 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" containerName="glance-log" Nov 24 12:44:56 crc kubenswrapper[4756]: E1124 12:44:56.301325 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" containerName="glance-httpd" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.301331 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" containerName="glance-httpd" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.301501 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" containerName="glance-httpd" Nov 24 12:44:56 crc kubenswrapper[4756]: 
I1124 12:44:56.301515 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="af71255b-f52f-494b-8f54-fb4f4526a742" containerName="dnsmasq-dns" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.301528 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" containerName="glance-log" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.301538 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" containerName="glance-httpd" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.301546 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" containerName="glance-log" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.302581 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.306912 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.306978 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-config" (OuterVolumeSpecName: "config") pod "af71255b-f52f-494b-8f54-fb4f4526a742" (UID: "af71255b-f52f-494b-8f54-fb4f4526a742"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.307334 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.312628 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.316149 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.336418 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.336610 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qx8l5" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.336831 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.337196 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.338643 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.356364 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.358389 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af71255b-f52f-494b-8f54-fb4f4526a742" (UID: "af71255b-f52f-494b-8f54-fb4f4526a742"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.388007 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.388057 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52caf4d5-4b74-438c-81cf-6b084ba79352-logs\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.388128 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52caf4d5-4b74-438c-81cf-6b084ba79352-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.388426 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.388483 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.388528 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.388616 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.388642 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrpsr\" (UniqueName: \"kubernetes.io/projected/52caf4d5-4b74-438c-81cf-6b084ba79352-kube-api-access-wrpsr\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.388704 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.388741 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.405656 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "af71255b-f52f-494b-8f54-fb4f4526a742" (UID: "af71255b-f52f-494b-8f54-fb4f4526a742"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.411885 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "af71255b-f52f-494b-8f54-fb4f4526a742" (UID: "af71255b-f52f-494b-8f54-fb4f4526a742"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496189 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496260 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52caf4d5-4b74-438c-81cf-6b084ba79352-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496291 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496312 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496333 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496350 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-logs\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496367 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496396 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496429 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496451 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496481 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496498 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrpsr\" (UniqueName: \"kubernetes.io/projected/52caf4d5-4b74-438c-81cf-6b084ba79352-kube-api-access-wrpsr\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496530 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496548 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496566 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lslpx\" (UniqueName: \"kubernetes.io/projected/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-kube-api-access-lslpx\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496605 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52caf4d5-4b74-438c-81cf-6b084ba79352-logs\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496655 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.496666 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af71255b-f52f-494b-8f54-fb4f4526a742-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.497074 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52caf4d5-4b74-438c-81cf-6b084ba79352-logs\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 
crc kubenswrapper[4756]: I1124 12:44:56.497314 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52caf4d5-4b74-438c-81cf-6b084ba79352-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.501126 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.508942 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-scripts\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.515703 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-config-data\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.517983 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.543355 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.562015 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrpsr\" (UniqueName: \"kubernetes.io/projected/52caf4d5-4b74-438c-81cf-6b084ba79352-kube-api-access-wrpsr\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.585830 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f" path="/var/lib/kubelet/pods/36cfea9c-8ce7-4c90-a2cd-fa3c7f6dab7f/volumes" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.586929 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="719963c5-57e8-4ada-8d76-bf0b7f8c9ebc" path="/var/lib/kubelet/pods/719963c5-57e8-4ada-8d76-bf0b7f8c9ebc/volumes" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.590693 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.603742 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 
12:44:56.603807 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.603979 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.605748 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lslpx\" (UniqueName: \"kubernetes.io/projected/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-kube-api-access-lslpx\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.605909 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.606079 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.606165 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.606213 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-logs\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.606549 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.607052 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-logs\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.607191 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.607269 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.623338 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.624094 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.629006 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.646467 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lslpx\" (UniqueName: \"kubernetes.io/projected/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-kube-api-access-lslpx\") pod \"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.649818 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"glance-default-external-api-0\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.743546 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.753032 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.777558 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5vk97"] Nov 24 12:44:56 crc kubenswrapper[4756]: I1124 12:44:56.793258 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-5vk97"] Nov 24 12:44:57 crc kubenswrapper[4756]: I1124 12:44:57.233262 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-5vk97" podUID="af71255b-f52f-494b-8f54-fb4f4526a742" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: i/o timeout" Nov 24 12:44:58 crc kubenswrapper[4756]: I1124 12:44:58.490391 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af71255b-f52f-494b-8f54-fb4f4526a742" path="/var/lib/kubelet/pods/af71255b-f52f-494b-8f54-fb4f4526a742/volumes" Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.147427 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl"] Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.149104 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.152788 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.152925 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.160479 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl"] Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.189255 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d2fe035-f2da-4a24-9796-1bdeb6198091-config-volume\") pod \"collect-profiles-29399805-t6tkl\" (UID: \"6d2fe035-f2da-4a24-9796-1bdeb6198091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.189637 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d2fe035-f2da-4a24-9796-1bdeb6198091-secret-volume\") pod \"collect-profiles-29399805-t6tkl\" (UID: \"6d2fe035-f2da-4a24-9796-1bdeb6198091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.189856 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24sqp\" (UniqueName: \"kubernetes.io/projected/6d2fe035-f2da-4a24-9796-1bdeb6198091-kube-api-access-24sqp\") pod \"collect-profiles-29399805-t6tkl\" (UID: \"6d2fe035-f2da-4a24-9796-1bdeb6198091\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.207494 4756 generic.go:334] "Generic (PLEG): container finished" podID="a6f689ad-3620-48b3-ae57-d19148ecb376" containerID="bfc21104f20e44b50196ee6e3e300bc2a3506de3eaa9357e6749792fd13c36c7" exitCode=0 Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.207550 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wrtzq" event={"ID":"a6f689ad-3620-48b3-ae57-d19148ecb376","Type":"ContainerDied","Data":"bfc21104f20e44b50196ee6e3e300bc2a3506de3eaa9357e6749792fd13c36c7"} Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.292565 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d2fe035-f2da-4a24-9796-1bdeb6198091-config-volume\") pod \"collect-profiles-29399805-t6tkl\" (UID: \"6d2fe035-f2da-4a24-9796-1bdeb6198091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.292681 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d2fe035-f2da-4a24-9796-1bdeb6198091-secret-volume\") pod \"collect-profiles-29399805-t6tkl\" (UID: \"6d2fe035-f2da-4a24-9796-1bdeb6198091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.292774 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24sqp\" (UniqueName: \"kubernetes.io/projected/6d2fe035-f2da-4a24-9796-1bdeb6198091-kube-api-access-24sqp\") pod \"collect-profiles-29399805-t6tkl\" (UID: \"6d2fe035-f2da-4a24-9796-1bdeb6198091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.294401 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d2fe035-f2da-4a24-9796-1bdeb6198091-config-volume\") pod \"collect-profiles-29399805-t6tkl\" (UID: \"6d2fe035-f2da-4a24-9796-1bdeb6198091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.302581 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d2fe035-f2da-4a24-9796-1bdeb6198091-secret-volume\") pod \"collect-profiles-29399805-t6tkl\" (UID: \"6d2fe035-f2da-4a24-9796-1bdeb6198091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.314432 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24sqp\" (UniqueName: \"kubernetes.io/projected/6d2fe035-f2da-4a24-9796-1bdeb6198091-kube-api-access-24sqp\") pod \"collect-profiles-29399805-t6tkl\" (UID: \"6d2fe035-f2da-4a24-9796-1bdeb6198091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" Nov 24 12:45:00 crc kubenswrapper[4756]: I1124 12:45:00.491655 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" Nov 24 12:45:03 crc kubenswrapper[4756]: I1124 12:45:03.275979 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:45:03 crc kubenswrapper[4756]: I1124 12:45:03.276088 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:45:03 crc kubenswrapper[4756]: I1124 12:45:03.276574 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Nov 24 12:45:03 crc kubenswrapper[4756]: I1124 12:45:03.276667 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.724268 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.731412 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.930907 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-logs\") pod \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.931009 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvh2r\" (UniqueName: \"kubernetes.io/projected/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-kube-api-access-gvh2r\") pod \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.931041 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0286b63b-3a9a-4623-ae9c-7032413a5154-scripts\") pod \"0286b63b-3a9a-4623-ae9c-7032413a5154\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.931094 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb5cz\" (UniqueName: \"kubernetes.io/projected/0286b63b-3a9a-4623-ae9c-7032413a5154-kube-api-access-bb5cz\") pod \"0286b63b-3a9a-4623-ae9c-7032413a5154\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.931123 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0286b63b-3a9a-4623-ae9c-7032413a5154-config-data\") pod \"0286b63b-3a9a-4623-ae9c-7032413a5154\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.931298 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-horizon-secret-key\") pod \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.931385 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0286b63b-3a9a-4623-ae9c-7032413a5154-logs\") pod \"0286b63b-3a9a-4623-ae9c-7032413a5154\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.931433 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0286b63b-3a9a-4623-ae9c-7032413a5154-horizon-secret-key\") pod \"0286b63b-3a9a-4623-ae9c-7032413a5154\" (UID: \"0286b63b-3a9a-4623-ae9c-7032413a5154\") " Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.931543 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-config-data\") pod \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.931577 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-scripts\") pod \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\" (UID: \"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5\") " Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.932512 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-logs" (OuterVolumeSpecName: "logs") pod "15dd5667-6a2e-48d3-8fd8-bccfb555a7c5" (UID: "15dd5667-6a2e-48d3-8fd8-bccfb555a7c5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.932673 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0286b63b-3a9a-4623-ae9c-7032413a5154-logs" (OuterVolumeSpecName: "logs") pod "0286b63b-3a9a-4623-ae9c-7032413a5154" (UID: "0286b63b-3a9a-4623-ae9c-7032413a5154"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.932883 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0286b63b-3a9a-4623-ae9c-7032413a5154-scripts" (OuterVolumeSpecName: "scripts") pod "0286b63b-3a9a-4623-ae9c-7032413a5154" (UID: "0286b63b-3a9a-4623-ae9c-7032413a5154"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.932937 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-scripts" (OuterVolumeSpecName: "scripts") pod "15dd5667-6a2e-48d3-8fd8-bccfb555a7c5" (UID: "15dd5667-6a2e-48d3-8fd8-bccfb555a7c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.933191 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-config-data" (OuterVolumeSpecName: "config-data") pod "15dd5667-6a2e-48d3-8fd8-bccfb555a7c5" (UID: "15dd5667-6a2e-48d3-8fd8-bccfb555a7c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.933242 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0286b63b-3a9a-4623-ae9c-7032413a5154-config-data" (OuterVolumeSpecName: "config-data") pod "0286b63b-3a9a-4623-ae9c-7032413a5154" (UID: "0286b63b-3a9a-4623-ae9c-7032413a5154"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.936782 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "15dd5667-6a2e-48d3-8fd8-bccfb555a7c5" (UID: "15dd5667-6a2e-48d3-8fd8-bccfb555a7c5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.936849 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0286b63b-3a9a-4623-ae9c-7032413a5154-kube-api-access-bb5cz" (OuterVolumeSpecName: "kube-api-access-bb5cz") pod "0286b63b-3a9a-4623-ae9c-7032413a5154" (UID: "0286b63b-3a9a-4623-ae9c-7032413a5154"). InnerVolumeSpecName "kube-api-access-bb5cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.943297 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-kube-api-access-gvh2r" (OuterVolumeSpecName: "kube-api-access-gvh2r") pod "15dd5667-6a2e-48d3-8fd8-bccfb555a7c5" (UID: "15dd5667-6a2e-48d3-8fd8-bccfb555a7c5"). InnerVolumeSpecName "kube-api-access-gvh2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:05 crc kubenswrapper[4756]: I1124 12:45:05.952321 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0286b63b-3a9a-4623-ae9c-7032413a5154-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0286b63b-3a9a-4623-ae9c-7032413a5154" (UID: "0286b63b-3a9a-4623-ae9c-7032413a5154"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.033644 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.033681 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.033690 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.033699 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvh2r\" (UniqueName: \"kubernetes.io/projected/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-kube-api-access-gvh2r\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.033709 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0286b63b-3a9a-4623-ae9c-7032413a5154-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.033717 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb5cz\" (UniqueName: 
\"kubernetes.io/projected/0286b63b-3a9a-4623-ae9c-7032413a5154-kube-api-access-bb5cz\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.033725 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0286b63b-3a9a-4623-ae9c-7032413a5154-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.033735 4756 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.033745 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0286b63b-3a9a-4623-ae9c-7032413a5154-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.033752 4756 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0286b63b-3a9a-4623-ae9c-7032413a5154-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: E1124 12:45:06.218445 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 24 12:45:06 crc kubenswrapper[4756]: E1124 12:45:06.218648 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hz4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-wz84p_openstack(8f485ab9-01fd-4640-833e-8ee586798f2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:45:06 crc kubenswrapper[4756]: E1124 12:45:06.220083 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-wz84p" 
podUID="8f485ab9-01fd-4640-833e-8ee586798f2e" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.259286 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.275324 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wrtzq" event={"ID":"a6f689ad-3620-48b3-ae57-d19148ecb376","Type":"ContainerDied","Data":"d51bb7f2c738ffecd21ce960c2d6d19c6e2c14623ca0d8928fe5c0e1807bdb84"} Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.275367 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d51bb7f2c738ffecd21ce960c2d6d19c6e2c14623ca0d8928fe5c0e1807bdb84" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.276511 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84746cc9cc-f9r4d" event={"ID":"0286b63b-3a9a-4623-ae9c-7032413a5154","Type":"ContainerDied","Data":"1cc68b32e8fb10711a9d7006cdb3ebd9e61a858a050fb91376b716bb66b30d3d"} Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.276613 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84746cc9cc-f9r4d" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.280967 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b954b9689-cp7gf" event={"ID":"15dd5667-6a2e-48d3-8fd8-bccfb555a7c5","Type":"ContainerDied","Data":"1aed797411e1da4d22f44e37c2715d90f2201a02ad8e5ef36d30456498583a4e"} Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.281054 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b954b9689-cp7gf" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.281372 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.282287 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wrtzq" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.300852 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"2c3bc094-a826-40f4-ba40-2525d31a13e1","Type":"ContainerDied","Data":"9e8125b8df0063fd22f6af1a3ad89d366cb186d9c17a3c92e5f8890e2e120e3d"} Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.307027 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-797c846fdf-pcdw6" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.307519 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-797c846fdf-pcdw6" event={"ID":"afe9dc1a-7891-46c4-a813-56ae99f0f886","Type":"ContainerDied","Data":"5a3d4705a5ef5215f589571b657cdc454630920cdfb3898fe9f0fdc5498c3bb4"} Nov 24 12:45:06 crc kubenswrapper[4756]: E1124 12:45:06.315111 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-wz84p" podUID="8f485ab9-01fd-4640-833e-8ee586798f2e" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.337688 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe9dc1a-7891-46c4-a813-56ae99f0f886-logs\") pod \"afe9dc1a-7891-46c4-a813-56ae99f0f886\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.337757 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/afe9dc1a-7891-46c4-a813-56ae99f0f886-config-data\") pod \"afe9dc1a-7891-46c4-a813-56ae99f0f886\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.337813 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdzds\" (UniqueName: \"kubernetes.io/projected/afe9dc1a-7891-46c4-a813-56ae99f0f886-kube-api-access-mdzds\") pod \"afe9dc1a-7891-46c4-a813-56ae99f0f886\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.337834 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-custom-prometheus-ca\") pod \"2c3bc094-a826-40f4-ba40-2525d31a13e1\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.338176 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afe9dc1a-7891-46c4-a813-56ae99f0f886-scripts\") pod \"afe9dc1a-7891-46c4-a813-56ae99f0f886\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.338233 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c3bc094-a826-40f4-ba40-2525d31a13e1-logs\") pod \"2c3bc094-a826-40f4-ba40-2525d31a13e1\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.338275 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2dt4\" (UniqueName: \"kubernetes.io/projected/a6f689ad-3620-48b3-ae57-d19148ecb376-kube-api-access-j2dt4\") pod \"a6f689ad-3620-48b3-ae57-d19148ecb376\" (UID: \"a6f689ad-3620-48b3-ae57-d19148ecb376\") " Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 
12:45:06.338300 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-config-data\") pod \"2c3bc094-a826-40f4-ba40-2525d31a13e1\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.338333 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9vwf\" (UniqueName: \"kubernetes.io/projected/2c3bc094-a826-40f4-ba40-2525d31a13e1-kube-api-access-s9vwf\") pod \"2c3bc094-a826-40f4-ba40-2525d31a13e1\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.338384 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6f689ad-3620-48b3-ae57-d19148ecb376-config\") pod \"a6f689ad-3620-48b3-ae57-d19148ecb376\" (UID: \"a6f689ad-3620-48b3-ae57-d19148ecb376\") " Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.338414 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/afe9dc1a-7891-46c4-a813-56ae99f0f886-horizon-secret-key\") pod \"afe9dc1a-7891-46c4-a813-56ae99f0f886\" (UID: \"afe9dc1a-7891-46c4-a813-56ae99f0f886\") " Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.338458 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f689ad-3620-48b3-ae57-d19148ecb376-combined-ca-bundle\") pod \"a6f689ad-3620-48b3-ae57-d19148ecb376\" (UID: \"a6f689ad-3620-48b3-ae57-d19148ecb376\") " Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.338479 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-combined-ca-bundle\") pod 
\"2c3bc094-a826-40f4-ba40-2525d31a13e1\" (UID: \"2c3bc094-a826-40f4-ba40-2525d31a13e1\") " Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.339136 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe9dc1a-7891-46c4-a813-56ae99f0f886-config-data" (OuterVolumeSpecName: "config-data") pod "afe9dc1a-7891-46c4-a813-56ae99f0f886" (UID: "afe9dc1a-7891-46c4-a813-56ae99f0f886"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.340018 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afe9dc1a-7891-46c4-a813-56ae99f0f886-logs" (OuterVolumeSpecName: "logs") pod "afe9dc1a-7891-46c4-a813-56ae99f0f886" (UID: "afe9dc1a-7891-46c4-a813-56ae99f0f886"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.340464 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe9dc1a-7891-46c4-a813-56ae99f0f886-scripts" (OuterVolumeSpecName: "scripts") pod "afe9dc1a-7891-46c4-a813-56ae99f0f886" (UID: "afe9dc1a-7891-46c4-a813-56ae99f0f886"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.342528 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c3bc094-a826-40f4-ba40-2525d31a13e1-logs" (OuterVolumeSpecName: "logs") pod "2c3bc094-a826-40f4-ba40-2525d31a13e1" (UID: "2c3bc094-a826-40f4-ba40-2525d31a13e1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.351267 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f689ad-3620-48b3-ae57-d19148ecb376-kube-api-access-j2dt4" (OuterVolumeSpecName: "kube-api-access-j2dt4") pod "a6f689ad-3620-48b3-ae57-d19148ecb376" (UID: "a6f689ad-3620-48b3-ae57-d19148ecb376"). InnerVolumeSpecName "kube-api-access-j2dt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.389238 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe9dc1a-7891-46c4-a813-56ae99f0f886-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "afe9dc1a-7891-46c4-a813-56ae99f0f886" (UID: "afe9dc1a-7891-46c4-a813-56ae99f0f886"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.389232 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c3bc094-a826-40f4-ba40-2525d31a13e1-kube-api-access-s9vwf" (OuterVolumeSpecName: "kube-api-access-s9vwf") pod "2c3bc094-a826-40f4-ba40-2525d31a13e1" (UID: "2c3bc094-a826-40f4-ba40-2525d31a13e1"). InnerVolumeSpecName "kube-api-access-s9vwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.389899 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe9dc1a-7891-46c4-a813-56ae99f0f886-kube-api-access-mdzds" (OuterVolumeSpecName: "kube-api-access-mdzds") pod "afe9dc1a-7891-46c4-a813-56ae99f0f886" (UID: "afe9dc1a-7891-46c4-a813-56ae99f0f886"). InnerVolumeSpecName "kube-api-access-mdzds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.400883 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b954b9689-cp7gf"] Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.419555 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b954b9689-cp7gf"] Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.431361 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f689ad-3620-48b3-ae57-d19148ecb376-config" (OuterVolumeSpecName: "config") pod "a6f689ad-3620-48b3-ae57-d19148ecb376" (UID: "a6f689ad-3620-48b3-ae57-d19148ecb376"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.439403 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c3bc094-a826-40f4-ba40-2525d31a13e1" (UID: "2c3bc094-a826-40f4-ba40-2525d31a13e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.440507 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afe9dc1a-7891-46c4-a813-56ae99f0f886-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.440555 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c3bc094-a826-40f4-ba40-2525d31a13e1-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.440564 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2dt4\" (UniqueName: \"kubernetes.io/projected/a6f689ad-3620-48b3-ae57-d19148ecb376-kube-api-access-j2dt4\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.440574 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9vwf\" (UniqueName: \"kubernetes.io/projected/2c3bc094-a826-40f4-ba40-2525d31a13e1-kube-api-access-s9vwf\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.440583 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6f689ad-3620-48b3-ae57-d19148ecb376-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.440597 4756 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/afe9dc1a-7891-46c4-a813-56ae99f0f886-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.440605 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.440614 4756 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe9dc1a-7891-46c4-a813-56ae99f0f886-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.440622 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afe9dc1a-7891-46c4-a813-56ae99f0f886-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.440630 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdzds\" (UniqueName: \"kubernetes.io/projected/afe9dc1a-7891-46c4-a813-56ae99f0f886-kube-api-access-mdzds\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.449214 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84746cc9cc-f9r4d"] Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.455099 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-84746cc9cc-f9r4d"] Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.460517 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f689ad-3620-48b3-ae57-d19148ecb376-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6f689ad-3620-48b3-ae57-d19148ecb376" (UID: "a6f689ad-3620-48b3-ae57-d19148ecb376"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.469865 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "2c3bc094-a826-40f4-ba40-2525d31a13e1" (UID: "2c3bc094-a826-40f4-ba40-2525d31a13e1"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.483446 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-config-data" (OuterVolumeSpecName: "config-data") pod "2c3bc094-a826-40f4-ba40-2525d31a13e1" (UID: "2c3bc094-a826-40f4-ba40-2525d31a13e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.491032 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0286b63b-3a9a-4623-ae9c-7032413a5154" path="/var/lib/kubelet/pods/0286b63b-3a9a-4623-ae9c-7032413a5154/volumes" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.493044 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15dd5667-6a2e-48d3-8fd8-bccfb555a7c5" path="/var/lib/kubelet/pods/15dd5667-6a2e-48d3-8fd8-bccfb555a7c5/volumes" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.545927 4756 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.545960 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c3bc094-a826-40f4-ba40-2525d31a13e1-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.545970 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f689ad-3620-48b3-ae57-d19148ecb376-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.724024 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-797c846fdf-pcdw6"] Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 
12:45:06.732781 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-797c846fdf-pcdw6"] Nov 24 12:45:06 crc kubenswrapper[4756]: I1124 12:45:06.739748 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-585c6478b8-gsbzg"] Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.314691 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.314972 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wrtzq" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.348730 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.363722 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.376289 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Nov 24 12:45:07 crc kubenswrapper[4756]: E1124 12:45:07.376738 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f689ad-3620-48b3-ae57-d19148ecb376" containerName="neutron-db-sync" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.376759 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f689ad-3620-48b3-ae57-d19148ecb376" containerName="neutron-db-sync" Nov 24 12:45:07 crc kubenswrapper[4756]: E1124 12:45:07.376781 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api-log" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.376789 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api-log" Nov 24 12:45:07 crc kubenswrapper[4756]: E1124 12:45:07.376802 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.376808 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.377032 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f689ad-3620-48b3-ae57-d19148ecb376" containerName="neutron-db-sync" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.377053 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api-log" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.377065 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.378588 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.381151 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.381688 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.381932 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.405303 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Nov 24 12:45:07 crc kubenswrapper[4756]: E1124 12:45:07.557791 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 24 12:45:07 crc 
kubenswrapper[4756]: E1124 12:45:07.558005 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzkbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePull
Policy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-vbnrc_openstack(e5e34263-c415-4300-a110-ab2ad6787566): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 12:45:07 crc kubenswrapper[4756]: E1124 12:45:07.559507 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-vbnrc" podUID="e5e34263-c415-4300-a110-ab2ad6787566" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.563584 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.563702 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-config-data\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.563757 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mttq2\" (UniqueName: 
\"kubernetes.io/projected/c84e5d83-17b7-47c7-9952-9e6942940b2a-kube-api-access-mttq2\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.563796 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c84e5d83-17b7-47c7-9952-9e6942940b2a-logs\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.563826 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.563884 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.564050 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.609400 4756 scope.go:117] "RemoveContainer" containerID="6b4afb0d8867c53a07e623fc9578d315402290ad9bf770ed451d84bae2b5fba8" Nov 24 12:45:07 crc kubenswrapper[4756]: W1124 12:45:07.623523 4756 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ae02ece_f457_4943_92fe_9569b5083f41.slice/crio-5c0f07fa762c82f2ae93c72ddc8654004b5975ac56a0e3e2efc7cb95b9ed2432 WatchSource:0}: Error finding container 5c0f07fa762c82f2ae93c72ddc8654004b5975ac56a0e3e2efc7cb95b9ed2432: Status 404 returned error can't find the container with id 5c0f07fa762c82f2ae93c72ddc8654004b5975ac56a0e3e2efc7cb95b9ed2432 Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.643746 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-fwcbh"] Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.645321 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.665340 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mttq2\" (UniqueName: \"kubernetes.io/projected/c84e5d83-17b7-47c7-9952-9e6942940b2a-kube-api-access-mttq2\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.665394 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c84e5d83-17b7-47c7-9952-9e6942940b2a-logs\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.665412 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.665527 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.666657 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c84e5d83-17b7-47c7-9952-9e6942940b2a-logs\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.667271 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.667325 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.667361 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-config-data\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.689023 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 
crc kubenswrapper[4756]: I1124 12:45:07.689626 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.690110 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.702105 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-config-data\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.703422 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-fwcbh"] Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.703779 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c84e5d83-17b7-47c7-9952-9e6942940b2a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.725825 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75ff655f7b-j2rvr"] Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.732302 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mttq2\" (UniqueName: \"kubernetes.io/projected/c84e5d83-17b7-47c7-9952-9e6942940b2a-kube-api-access-mttq2\") pod \"watcher-api-0\" 
(UID: \"c84e5d83-17b7-47c7-9952-9e6942940b2a\") " pod="openstack/watcher-api-0" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.735678 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.759279 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rv2mk" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.759533 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.759985 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.761768 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.769758 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-config\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.769826 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.769881 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rzqf\" (UniqueName: 
\"kubernetes.io/projected/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-kube-api-access-4rzqf\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.769920 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.770033 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-dns-svc\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.770246 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.851077 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75ff655f7b-j2rvr"] Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.872582 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-ovndb-tls-certs\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 
12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.872641 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rzqf\" (UniqueName: \"kubernetes.io/projected/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-kube-api-access-4rzqf\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.872668 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlngz\" (UniqueName: \"kubernetes.io/projected/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-kube-api-access-mlngz\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.872695 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.872717 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-config\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.872743 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-dns-svc\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: 
I1124 12:45:07.872771 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.872813 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-combined-ca-bundle\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.872872 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-httpd-config\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.872903 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-config\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.872942 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.874706 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-config\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.874922 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.875835 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.875916 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-dns-svc\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.876009 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.895956 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rzqf\" (UniqueName: 
\"kubernetes.io/projected/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-kube-api-access-4rzqf\") pod \"dnsmasq-dns-6b7b667979-fwcbh\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.976010 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-ovndb-tls-certs\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.976127 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlngz\" (UniqueName: \"kubernetes.io/projected/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-kube-api-access-mlngz\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.976268 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-config\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.976472 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-combined-ca-bundle\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.976581 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-httpd-config\") pod 
\"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.982096 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-ovndb-tls-certs\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.982967 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-httpd-config\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.985206 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-config\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.985726 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-combined-ca-bundle\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:07 crc kubenswrapper[4756]: I1124 12:45:07.997475 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlngz\" (UniqueName: \"kubernetes.io/projected/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-kube-api-access-mlngz\") pod \"neutron-75ff655f7b-j2rvr\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:08 crc 
kubenswrapper[4756]: I1124 12:45:08.006850 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Nov 24 12:45:08 crc kubenswrapper[4756]: I1124 12:45:08.077195 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:08 crc kubenswrapper[4756]: I1124 12:45:08.104568 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:08 crc kubenswrapper[4756]: I1124 12:45:08.221319 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b958b5cb8-lff28"] Nov 24 12:45:08 crc kubenswrapper[4756]: I1124 12:45:08.277135 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:45:08 crc kubenswrapper[4756]: I1124 12:45:08.277247 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:45:08 crc kubenswrapper[4756]: W1124 12:45:08.325639 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac680988_de91_4b39_ac09_3938cd5a2f91.slice/crio-8cbe0b3ecfef46f7d68524226ae308001ff6c658bdf2936cfebcadb0fe5fb156 WatchSource:0}: Error finding container 8cbe0b3ecfef46f7d68524226ae308001ff6c658bdf2936cfebcadb0fe5fb156: Status 404 returned error can't find the container with id 8cbe0b3ecfef46f7d68524226ae308001ff6c658bdf2936cfebcadb0fe5fb156 Nov 24 12:45:08 crc kubenswrapper[4756]: I1124 
12:45:08.327347 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-585c6478b8-gsbzg" event={"ID":"6ae02ece-f457-4943-92fe-9569b5083f41","Type":"ContainerStarted","Data":"5c0f07fa762c82f2ae93c72ddc8654004b5975ac56a0e3e2efc7cb95b9ed2432"} Nov 24 12:45:08 crc kubenswrapper[4756]: I1124 12:45:08.382695 4756 scope.go:117] "RemoveContainer" containerID="162050712a6ce5a9edb1731cdcaaa1ec29485d901d92b1a2022f15e4259de83d" Nov 24 12:45:08 crc kubenswrapper[4756]: E1124 12:45:08.382962 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-vbnrc" podUID="e5e34263-c415-4300-a110-ab2ad6787566" Nov 24 12:45:08 crc kubenswrapper[4756]: I1124 12:45:08.509191 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c3bc094-a826-40f4-ba40-2525d31a13e1" path="/var/lib/kubelet/pods/2c3bc094-a826-40f4-ba40-2525d31a13e1/volumes" Nov 24 12:45:08 crc kubenswrapper[4756]: I1124 12:45:08.511527 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe9dc1a-7891-46c4-a813-56ae99f0f886" path="/var/lib/kubelet/pods/afe9dc1a-7891-46c4-a813-56ae99f0f886/volumes" Nov 24 12:45:08 crc kubenswrapper[4756]: I1124 12:45:08.566255 4756 scope.go:117] "RemoveContainer" containerID="7712822a54e3ddeb1ddfac7d35d6093dc0978ce2e147c44641792a7eca82e607" Nov 24 12:45:08 crc kubenswrapper[4756]: I1124 12:45:08.969691 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4kphn"] Nov 24 12:45:09 crc kubenswrapper[4756]: I1124 12:45:09.108560 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:45:09 crc kubenswrapper[4756]: I1124 12:45:09.219512 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl"] Nov 24 12:45:09 crc kubenswrapper[4756]: I1124 12:45:09.232651 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:45:09 crc kubenswrapper[4756]: I1124 12:45:09.348645 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b958b5cb8-lff28" event={"ID":"ac680988-de91-4b39-ac09-3938cd5a2f91","Type":"ContainerStarted","Data":"8cbe0b3ecfef46f7d68524226ae308001ff6c658bdf2936cfebcadb0fe5fb156"} Nov 24 12:45:09 crc kubenswrapper[4756]: I1124 12:45:09.367033 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-fwcbh"] Nov 24 12:45:09 crc kubenswrapper[4756]: I1124 12:45:09.458799 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Nov 24 12:45:09 crc kubenswrapper[4756]: I1124 12:45:09.542480 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75ff655f7b-j2rvr"] Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.105508 4756 scope.go:117] "RemoveContainer" containerID="ad2c8404b45ba392be9a1f43000ceb015285ca88a370649e8d717397ddfcabf2" Nov 24 12:45:10 crc kubenswrapper[4756]: W1124 12:45:10.136063 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3efa48e0_4cac_4152_8920_74324d606778.slice/crio-f5842d04c712c9453c8973efde04ce799dbfb7d334a321cd4a1e314c4c058a01 WatchSource:0}: Error finding container f5842d04c712c9453c8973efde04ce799dbfb7d334a321cd4a1e314c4c058a01: Status 404 returned error can't find the container with id f5842d04c712c9453c8973efde04ce799dbfb7d334a321cd4a1e314c4c058a01 Nov 24 12:45:10 crc kubenswrapper[4756]: W1124 12:45:10.140666 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52caf4d5_4b74_438c_81cf_6b084ba79352.slice/crio-a608118f943333494da942af524833d7e86f3bd7e3a6fbef0866d9f658f31cca WatchSource:0}: Error finding container a608118f943333494da942af524833d7e86f3bd7e3a6fbef0866d9f658f31cca: Status 404 returned error can't find the container with id a608118f943333494da942af524833d7e86f3bd7e3a6fbef0866d9f658f31cca Nov 24 12:45:10 crc kubenswrapper[4756]: W1124 12:45:10.143339 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d2fe035_f2da_4a24_9796_1bdeb6198091.slice/crio-edee6fb33fb4e398beeb17bb6a693b6fef2f9124a5544de9c98770ad34bb9035 WatchSource:0}: Error finding container edee6fb33fb4e398beeb17bb6a693b6fef2f9124a5544de9c98770ad34bb9035: Status 404 returned error can't find the container with id edee6fb33fb4e398beeb17bb6a693b6fef2f9124a5544de9c98770ad34bb9035 Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.190867 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.279322 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b4ddfdbf7-zd9s5"] Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.282316 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.285479 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.285981 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.304036 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b4ddfdbf7-zd9s5"] Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.347741 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-httpd-config\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.347788 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-ovndb-tls-certs\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.347856 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-public-tls-certs\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.349542 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-config\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.349630 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-internal-tls-certs\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.349748 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spwql\" (UniqueName: \"kubernetes.io/projected/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-kube-api-access-spwql\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.349807 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-combined-ca-bundle\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.379771 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4","Type":"ContainerStarted","Data":"088d4c7b1f64cc74b39405e8e8df3992e2c0e9f2f9e810b4ab9615dc61b68b4b"} Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.382572 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75ff655f7b-j2rvr" 
event={"ID":"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97","Type":"ContainerStarted","Data":"5edffee6f2b288416682950636922c58335e3fe25c0dea7138a5c70cf93f0e2c"} Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.388369 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c84e5d83-17b7-47c7-9952-9e6942940b2a","Type":"ContainerStarted","Data":"ec3fd01b16246c64ce945b46c30f39796a0f8b8a640c28862fded1261a8263e1"} Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.392763 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4kphn" event={"ID":"3efa48e0-4cac-4152-8920-74324d606778","Type":"ContainerStarted","Data":"f5842d04c712c9453c8973efde04ce799dbfb7d334a321cd4a1e314c4c058a01"} Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.395870 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" event={"ID":"6d2fe035-f2da-4a24-9796-1bdeb6198091","Type":"ContainerStarted","Data":"edee6fb33fb4e398beeb17bb6a693b6fef2f9124a5544de9c98770ad34bb9035"} Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.398080 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" event={"ID":"f48f5095-0233-463e-b8ce-7bf7bf6e51e3","Type":"ContainerStarted","Data":"30e621b57d1389928d31f41130fad1b857f9963f5e73b93b5d8e2f254eb9a157"} Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.399049 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52caf4d5-4b74-438c-81cf-6b084ba79352","Type":"ContainerStarted","Data":"a608118f943333494da942af524833d7e86f3bd7e3a6fbef0866d9f658f31cca"} Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.435606 4756 scope.go:117] "RemoveContainer" containerID="438f37723b29e8d1d50f2336abacbd2db4575e9ec3df901853b6eca5210ea2ad" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.451391 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-public-tls-certs\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.451469 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-config\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.451502 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-internal-tls-certs\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.451791 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spwql\" (UniqueName: \"kubernetes.io/projected/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-kube-api-access-spwql\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.451831 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-combined-ca-bundle\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.451877 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-httpd-config\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.452305 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-ovndb-tls-certs\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.458652 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-combined-ca-bundle\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.460324 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-public-tls-certs\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.461265 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-httpd-config\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.462116 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-config\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: 
\"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.462666 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-ovndb-tls-certs\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.464143 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-internal-tls-certs\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.472750 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spwql\" (UniqueName: \"kubernetes.io/projected/d2f5b7c5-30dd-4145-a18a-fe929e4d660a-kube-api-access-spwql\") pod \"neutron-7b4ddfdbf7-zd9s5\" (UID: \"d2f5b7c5-30dd-4145-a18a-fe929e4d660a\") " pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.511437 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.566862 4756 scope.go:117] "RemoveContainer" containerID="7fdd4baadb6f98e25135fb7b3bfe8f63130b57de9beefc19fbfe6320cee2527b" Nov 24 12:45:10 crc kubenswrapper[4756]: I1124 12:45:10.682246 4756 scope.go:117] "RemoveContainer" containerID="9e693877be013cc3325149bb4442619bfda064dcda426f4ea1dd3c73e823a82c" Nov 24 12:45:11 crc kubenswrapper[4756]: I1124 12:45:11.014193 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b4ddfdbf7-zd9s5"] Nov 24 12:45:11 crc kubenswrapper[4756]: W1124 12:45:11.017573 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f5b7c5_30dd_4145_a18a_fe929e4d660a.slice/crio-c02d5aabf48d42033b9975bc14b51dee467e17e4416712328f9dc37e2940b509 WatchSource:0}: Error finding container c02d5aabf48d42033b9975bc14b51dee467e17e4416712328f9dc37e2940b509: Status 404 returned error can't find the container with id c02d5aabf48d42033b9975bc14b51dee467e17e4416712328f9dc37e2940b509 Nov 24 12:45:11 crc kubenswrapper[4756]: I1124 12:45:11.424607 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4ddfdbf7-zd9s5" event={"ID":"d2f5b7c5-30dd-4145-a18a-fe929e4d660a","Type":"ContainerStarted","Data":"c02d5aabf48d42033b9975bc14b51dee467e17e4416712328f9dc37e2940b509"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.460283 4756 generic.go:334] "Generic (PLEG): container finished" podID="f48f5095-0233-463e-b8ce-7bf7bf6e51e3" containerID="ea8904a79318528b72df8c293c7091235959c470516d84558bc657123e09e126" exitCode=0 Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.461914 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" 
event={"ID":"f48f5095-0233-463e-b8ce-7bf7bf6e51e3","Type":"ContainerDied","Data":"ea8904a79318528b72df8c293c7091235959c470516d84558bc657123e09e126"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.542226 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4kphn" podStartSLOduration=28.542206603 podStartE2EDuration="28.542206603s" podCreationTimestamp="2025-11-24 12:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:12.536564678 +0000 UTC m=+1044.894078830" watchObservedRunningTime="2025-11-24 12:45:12.542206603 +0000 UTC m=+1044.899720755" Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.585751 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=8.969486399 podStartE2EDuration="40.585733145s" podCreationTimestamp="2025-11-24 12:44:32 +0000 UTC" firstStartedPulling="2025-11-24 12:44:33.998198651 +0000 UTC m=+1006.355712783" lastFinishedPulling="2025-11-24 12:45:05.614445387 +0000 UTC m=+1037.971959529" observedRunningTime="2025-11-24 12:45:12.581789537 +0000 UTC m=+1044.939303669" watchObservedRunningTime="2025-11-24 12:45:12.585733145 +0000 UTC m=+1044.943247287" Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.599939 4756 generic.go:334] "Generic (PLEG): container finished" podID="6d2fe035-f2da-4a24-9796-1bdeb6198091" containerID="0240db7d1e843641993b9faa1aecf00bcf4ab4fbdea6e2b1f6ff60793b9fc48c" exitCode=0 Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.624504 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4kphn" event={"ID":"3efa48e0-4cac-4152-8920-74324d606778","Type":"ContainerStarted","Data":"0e8549083469d6c6c9629a847ea97c5f2f5c3c8a75454bc7a7442d5004766ab6"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.624561 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d","Type":"ContainerStarted","Data":"0076564da24a803e4f671c7c84babd1cd4d8489c72036d90ca7cb73e02e2690b"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.624577 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-585c6478b8-gsbzg" event={"ID":"6ae02ece-f457-4943-92fe-9569b5083f41","Type":"ContainerStarted","Data":"db0b3287652229ca1aab77f91e95fb536365b366e7bae679404f78192f3b48f5"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.624590 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" event={"ID":"6d2fe035-f2da-4a24-9796-1bdeb6198091","Type":"ContainerDied","Data":"0240db7d1e843641993b9faa1aecf00bcf4ab4fbdea6e2b1f6ff60793b9fc48c"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.624602 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75ff655f7b-j2rvr" event={"ID":"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97","Type":"ContainerStarted","Data":"c50b718041377a553045016e4b00c94cdc1e71857eb88178cbdb72ba0348e8ef"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.624611 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9bg6g" event={"ID":"ee404871-3e83-4fa1-a773-df0c95222c32","Type":"ContainerStarted","Data":"635fe80f24e6a7a1873834ce53a8e72cd5fc4fc981da9a13a467ad87cd828a3a"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.624621 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03921298-d6d8-404c-9ee5-c5101a92892e","Type":"ContainerStarted","Data":"a3050d179fbdab481a2010a5aaac94e4bfd117b6378e0ed2b08f39d1f8acdb47"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.624630 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"52caf4d5-4b74-438c-81cf-6b084ba79352","Type":"ContainerStarted","Data":"3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.638793 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4ddfdbf7-zd9s5" event={"ID":"d2f5b7c5-30dd-4145-a18a-fe929e4d660a","Type":"ContainerStarted","Data":"b5cd941925cc7859763607b02a17b609d643bc6c25f82a765b3fc89276ab1f56"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.649660 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9e9988f7-fa01-4411-986d-ac6ba024a7a5","Type":"ContainerStarted","Data":"f70104dffd593cb2056af9068fbf795a17c20dd15ab1bba51c7806fc88b1d0b0"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.664439 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c84e5d83-17b7-47c7-9952-9e6942940b2a","Type":"ContainerStarted","Data":"d4dfb30563238af0bd18ef602a9edb624a664d0d323f8921b453457656cc3d8a"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.671144 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-9bg6g" podStartSLOduration=10.164959375 podStartE2EDuration="45.671117023s" podCreationTimestamp="2025-11-24 12:44:27 +0000 UTC" firstStartedPulling="2025-11-24 12:44:30.103460086 +0000 UTC m=+1002.460974228" lastFinishedPulling="2025-11-24 12:45:05.609617734 +0000 UTC m=+1037.967131876" observedRunningTime="2025-11-24 12:45:12.655649316 +0000 UTC m=+1045.013163448" watchObservedRunningTime="2025-11-24 12:45:12.671117023 +0000 UTC m=+1045.028631175" Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.672954 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4","Type":"ContainerStarted","Data":"7914416f46e4e4bdb8c96af316048967ac7de60df66418a5f589b54ddd148934"} Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.690945 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=18.98694701 podStartE2EDuration="40.69092644s" podCreationTimestamp="2025-11-24 12:44:32 +0000 UTC" firstStartedPulling="2025-11-24 12:44:33.827028385 +0000 UTC m=+1006.184542527" lastFinishedPulling="2025-11-24 12:44:55.531007815 +0000 UTC m=+1027.888521957" observedRunningTime="2025-11-24 12:45:12.681477589 +0000 UTC m=+1045.038991731" watchObservedRunningTime="2025-11-24 12:45:12.69092644 +0000 UTC m=+1045.048440582" Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.933409 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.933466 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Nov 24 12:45:12 crc kubenswrapper[4756]: I1124 12:45:12.947459 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.000226 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.021699 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.689063 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-585c6478b8-gsbzg" event={"ID":"6ae02ece-f457-4943-92fe-9569b5083f41","Type":"ContainerStarted","Data":"0735ceebbc47b8d07479093c986ddcd87d6011d6452676ab55f116acde16109c"} Nov 24 12:45:13 crc kubenswrapper[4756]: 
I1124 12:45:13.694076 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4","Type":"ContainerStarted","Data":"52f01a7df47bbf6998c56d996cd1c23b85f428f4c41baa361084785acadd6d71"} Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.696789 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75ff655f7b-j2rvr" event={"ID":"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97","Type":"ContainerStarted","Data":"9055e289857069bf82a30011465a6844f89ab4e18f7414413d0ccb0bb3cf7c66"} Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.697955 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.706411 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b958b5cb8-lff28" event={"ID":"ac680988-de91-4b39-ac09-3938cd5a2f91","Type":"ContainerStarted","Data":"83264baafdec5f896f80ba132bef8428f337228e6f1141cf6f1da54db08c9038"} Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.710678 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" event={"ID":"f48f5095-0233-463e-b8ce-7bf7bf6e51e3","Type":"ContainerStarted","Data":"1910b17675efe7e7dabaecf04543bc2360c4ac17979ddc9d6b8af5a3678b4248"} Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.726971 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75ff655f7b-j2rvr" podStartSLOduration=6.726944968 podStartE2EDuration="6.726944968s" podCreationTimestamp="2025-11-24 12:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:13.716073588 +0000 UTC m=+1046.073587730" watchObservedRunningTime="2025-11-24 12:45:13.726944968 +0000 UTC m=+1046.084459110" Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 
12:45:13.740930 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52caf4d5-4b74-438c-81cf-6b084ba79352","Type":"ContainerStarted","Data":"98b5e437274c079fd823c37951926ea990e517ac03d50b325980caec0ffc4db7"} Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.745152 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b4ddfdbf7-zd9s5" event={"ID":"d2f5b7c5-30dd-4145-a18a-fe929e4d660a","Type":"ContainerStarted","Data":"cb4c3aafc78733869eb97ed53028005a9e9f2a07ebc96c34ee17647a3f1b02df"} Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.746414 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.751008 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c84e5d83-17b7-47c7-9952-9e6942940b2a","Type":"ContainerStarted","Data":"74fb73e4fdf7f61c61f4f8f21a50b41265b62e33e026037bf1a225d2bd1fbe0b"} Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.751045 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.751922 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.775499 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b4ddfdbf7-zd9s5" podStartSLOduration=3.775483098 podStartE2EDuration="3.775483098s" podCreationTimestamp="2025-11-24 12:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:13.769431971 +0000 UTC m=+1046.126946123" watchObservedRunningTime="2025-11-24 12:45:13.775483098 +0000 UTC m=+1046.132997240" Nov 24 12:45:13 crc kubenswrapper[4756]: 
I1124 12:45:13.790927 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=6.790909374 podStartE2EDuration="6.790909374s" podCreationTimestamp="2025-11-24 12:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:13.790066631 +0000 UTC m=+1046.147580773" watchObservedRunningTime="2025-11-24 12:45:13.790909374 +0000 UTC m=+1046.148423516" Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.808493 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Nov 24 12:45:13 crc kubenswrapper[4756]: I1124 12:45:13.860310 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.292610 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.355257 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d2fe035-f2da-4a24-9796-1bdeb6198091-secret-volume\") pod \"6d2fe035-f2da-4a24-9796-1bdeb6198091\" (UID: \"6d2fe035-f2da-4a24-9796-1bdeb6198091\") " Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.355434 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24sqp\" (UniqueName: \"kubernetes.io/projected/6d2fe035-f2da-4a24-9796-1bdeb6198091-kube-api-access-24sqp\") pod \"6d2fe035-f2da-4a24-9796-1bdeb6198091\" (UID: \"6d2fe035-f2da-4a24-9796-1bdeb6198091\") " Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.355596 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/6d2fe035-f2da-4a24-9796-1bdeb6198091-config-volume\") pod \"6d2fe035-f2da-4a24-9796-1bdeb6198091\" (UID: \"6d2fe035-f2da-4a24-9796-1bdeb6198091\") " Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.357051 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d2fe035-f2da-4a24-9796-1bdeb6198091-config-volume" (OuterVolumeSpecName: "config-volume") pod "6d2fe035-f2da-4a24-9796-1bdeb6198091" (UID: "6d2fe035-f2da-4a24-9796-1bdeb6198091"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.364370 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d2fe035-f2da-4a24-9796-1bdeb6198091-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6d2fe035-f2da-4a24-9796-1bdeb6198091" (UID: "6d2fe035-f2da-4a24-9796-1bdeb6198091"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.376446 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d2fe035-f2da-4a24-9796-1bdeb6198091-kube-api-access-24sqp" (OuterVolumeSpecName: "kube-api-access-24sqp") pod "6d2fe035-f2da-4a24-9796-1bdeb6198091" (UID: "6d2fe035-f2da-4a24-9796-1bdeb6198091"). InnerVolumeSpecName "kube-api-access-24sqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.458604 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24sqp\" (UniqueName: \"kubernetes.io/projected/6d2fe035-f2da-4a24-9796-1bdeb6198091-kube-api-access-24sqp\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.458924 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d2fe035-f2da-4a24-9796-1bdeb6198091-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.458934 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d2fe035-f2da-4a24-9796-1bdeb6198091-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.766593 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.767184 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl" event={"ID":"6d2fe035-f2da-4a24-9796-1bdeb6198091","Type":"ContainerDied","Data":"edee6fb33fb4e398beeb17bb6a693b6fef2f9124a5544de9c98770ad34bb9035"} Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.767205 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edee6fb33fb4e398beeb17bb6a693b6fef2f9124a5544de9c98770ad34bb9035" Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.776122 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b958b5cb8-lff28" event={"ID":"ac680988-de91-4b39-ac09-3938cd5a2f91","Type":"ContainerStarted","Data":"d4210a5e297caab6fdf04b86fae8401d9ffa8250dd43be159b9b134b31037963"} Nov 24 12:45:14 crc kubenswrapper[4756]: 
I1124 12:45:14.822800 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b958b5cb8-lff28" podStartSLOduration=35.376385061 podStartE2EDuration="38.822778937s" podCreationTimestamp="2025-11-24 12:44:36 +0000 UTC" firstStartedPulling="2025-11-24 12:45:08.382936784 +0000 UTC m=+1040.740450926" lastFinishedPulling="2025-11-24 12:45:11.82933066 +0000 UTC m=+1044.186844802" observedRunningTime="2025-11-24 12:45:14.817242115 +0000 UTC m=+1047.174756267" watchObservedRunningTime="2025-11-24 12:45:14.822778937 +0000 UTC m=+1047.180293079" Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.929086 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-585c6478b8-gsbzg" podStartSLOduration=36.358949254 podStartE2EDuration="38.929058972s" podCreationTimestamp="2025-11-24 12:44:36 +0000 UTC" firstStartedPulling="2025-11-24 12:45:07.630501727 +0000 UTC m=+1039.988015869" lastFinishedPulling="2025-11-24 12:45:10.200611445 +0000 UTC m=+1042.558125587" observedRunningTime="2025-11-24 12:45:14.86090451 +0000 UTC m=+1047.218418672" watchObservedRunningTime="2025-11-24 12:45:14.929058972 +0000 UTC m=+1047.286573114" Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.957424 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" podStartSLOduration=7.957396825 podStartE2EDuration="7.957396825s" podCreationTimestamp="2025-11-24 12:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:14.907555558 +0000 UTC m=+1047.265069700" watchObservedRunningTime="2025-11-24 12:45:14.957396825 +0000 UTC m=+1047.314910967" Nov 24 12:45:14 crc kubenswrapper[4756]: I1124 12:45:14.974752 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=18.974729723 
podStartE2EDuration="18.974729723s" podCreationTimestamp="2025-11-24 12:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:14.939099599 +0000 UTC m=+1047.296613741" watchObservedRunningTime="2025-11-24 12:45:14.974729723 +0000 UTC m=+1047.332243905" Nov 24 12:45:15 crc kubenswrapper[4756]: I1124 12:45:15.013632 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=19.013607397 podStartE2EDuration="19.013607397s" podCreationTimestamp="2025-11-24 12:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:14.966029253 +0000 UTC m=+1047.323543415" watchObservedRunningTime="2025-11-24 12:45:15.013607397 +0000 UTC m=+1047.371121539" Nov 24 12:45:16 crc kubenswrapper[4756]: I1124 12:45:16.743799 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 12:45:16 crc kubenswrapper[4756]: I1124 12:45:16.744146 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 12:45:16 crc kubenswrapper[4756]: I1124 12:45:16.755950 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 12:45:16 crc kubenswrapper[4756]: I1124 12:45:16.756003 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 12:45:16 crc kubenswrapper[4756]: I1124 12:45:16.816064 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 12:45:16 crc kubenswrapper[4756]: I1124 12:45:16.816966 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Nov 24 12:45:16 crc kubenswrapper[4756]: I1124 12:45:16.817967 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 12:45:16 crc kubenswrapper[4756]: I1124 12:45:16.818575 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 12:45:16 crc kubenswrapper[4756]: I1124 12:45:16.828115 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 12:45:16 crc kubenswrapper[4756]: I1124 12:45:16.838064 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 12:45:16 crc kubenswrapper[4756]: I1124 12:45:16.968632 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:45:16 crc kubenswrapper[4756]: I1124 12:45:16.969266 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:45:17 crc kubenswrapper[4756]: I1124 12:45:17.065282 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:45:17 crc kubenswrapper[4756]: I1124 12:45:17.065341 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:45:17 crc kubenswrapper[4756]: I1124 12:45:17.289216 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Nov 24 12:45:17 crc kubenswrapper[4756]: I1124 12:45:17.810761 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 12:45:17 crc kubenswrapper[4756]: I1124 12:45:17.810812 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Nov 24 12:45:18 crc kubenswrapper[4756]: I1124 12:45:18.008255 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Nov 24 12:45:18 crc kubenswrapper[4756]: I1124 12:45:18.008311 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Nov 24 12:45:18 crc kubenswrapper[4756]: I1124 12:45:18.050341 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Nov 24 12:45:18 crc kubenswrapper[4756]: I1124 12:45:18.077683 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:18 crc kubenswrapper[4756]: I1124 12:45:18.078486 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:18 crc kubenswrapper[4756]: I1124 12:45:18.161431 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5lxht"] Nov 24 12:45:18 crc kubenswrapper[4756]: I1124 12:45:18.161675 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" podUID="27a19519-b508-46b4-b8e7-87cf03d7c6bd" containerName="dnsmasq-dns" containerID="cri-o://db7f73c5e8ac9272839f7353c43a2512123a455adc864d82f1d8e18c0b8ca000" gracePeriod=10 Nov 24 12:45:18 crc kubenswrapper[4756]: I1124 12:45:18.442634 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" podUID="27a19519-b508-46b4-b8e7-87cf03d7c6bd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Nov 24 12:45:18 crc kubenswrapper[4756]: I1124 12:45:18.824831 4756 generic.go:334] "Generic (PLEG): container finished" podID="27a19519-b508-46b4-b8e7-87cf03d7c6bd" containerID="db7f73c5e8ac9272839f7353c43a2512123a455adc864d82f1d8e18c0b8ca000" 
exitCode=0 Nov 24 12:45:18 crc kubenswrapper[4756]: I1124 12:45:18.824946 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" event={"ID":"27a19519-b508-46b4-b8e7-87cf03d7c6bd","Type":"ContainerDied","Data":"db7f73c5e8ac9272839f7353c43a2512123a455adc864d82f1d8e18c0b8ca000"} Nov 24 12:45:18 crc kubenswrapper[4756]: I1124 12:45:18.826957 4756 generic.go:334] "Generic (PLEG): container finished" podID="3efa48e0-4cac-4152-8920-74324d606778" containerID="0e8549083469d6c6c9629a847ea97c5f2f5c3c8a75454bc7a7442d5004766ab6" exitCode=0 Nov 24 12:45:18 crc kubenswrapper[4756]: I1124 12:45:18.827092 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4kphn" event={"ID":"3efa48e0-4cac-4152-8920-74324d606778","Type":"ContainerDied","Data":"0e8549083469d6c6c9629a847ea97c5f2f5c3c8a75454bc7a7442d5004766ab6"} Nov 24 12:45:18 crc kubenswrapper[4756]: I1124 12:45:18.858455 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Nov 24 12:45:19 crc kubenswrapper[4756]: I1124 12:45:19.859066 4756 generic.go:334] "Generic (PLEG): container finished" podID="ee404871-3e83-4fa1-a773-df0c95222c32" containerID="635fe80f24e6a7a1873834ce53a8e72cd5fc4fc981da9a13a467ad87cd828a3a" exitCode=0 Nov 24 12:45:19 crc kubenswrapper[4756]: I1124 12:45:19.859345 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9bg6g" event={"ID":"ee404871-3e83-4fa1-a773-df0c95222c32","Type":"ContainerDied","Data":"635fe80f24e6a7a1873834ce53a8e72cd5fc4fc981da9a13a467ad87cd828a3a"} Nov 24 12:45:22 crc kubenswrapper[4756]: I1124 12:45:22.417237 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 12:45:22 crc kubenswrapper[4756]: I1124 12:45:22.453013 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 12:45:22 
crc kubenswrapper[4756]: I1124 12:45:22.453178 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:45:22 crc kubenswrapper[4756]: I1124 12:45:22.455020 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 12:45:22 crc kubenswrapper[4756]: I1124 12:45:22.670261 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 12:45:22 crc kubenswrapper[4756]: I1124 12:45:22.895112 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4kphn" event={"ID":"3efa48e0-4cac-4152-8920-74324d606778","Type":"ContainerDied","Data":"f5842d04c712c9453c8973efde04ce799dbfb7d334a321cd4a1e314c4c058a01"} Nov 24 12:45:22 crc kubenswrapper[4756]: I1124 12:45:22.895276 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5842d04c712c9453c8973efde04ce799dbfb7d334a321cd4a1e314c4c058a01" Nov 24 12:45:22 crc kubenswrapper[4756]: I1124 12:45:22.900179 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9bg6g" event={"ID":"ee404871-3e83-4fa1-a773-df0c95222c32","Type":"ContainerDied","Data":"dc5fdb79070a105144059bf4c68d7f99fa4e868b40c501313eb0887df86e86c6"} Nov 24 12:45:22 crc kubenswrapper[4756]: I1124 12:45:22.900217 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc5fdb79070a105144059bf4c68d7f99fa4e868b40c501313eb0887df86e86c6" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.243354 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.276138 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-credential-keys\") pod \"3efa48e0-4cac-4152-8920-74324d606778\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.276213 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-fernet-keys\") pod \"3efa48e0-4cac-4152-8920-74324d606778\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.276281 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4tnh\" (UniqueName: \"kubernetes.io/projected/3efa48e0-4cac-4152-8920-74324d606778-kube-api-access-m4tnh\") pod \"3efa48e0-4cac-4152-8920-74324d606778\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.276354 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-combined-ca-bundle\") pod \"3efa48e0-4cac-4152-8920-74324d606778\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.276599 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-config-data\") pod \"3efa48e0-4cac-4152-8920-74324d606778\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.276644 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-scripts\") pod \"3efa48e0-4cac-4152-8920-74324d606778\" (UID: \"3efa48e0-4cac-4152-8920-74324d606778\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.289214 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3efa48e0-4cac-4152-8920-74324d606778-kube-api-access-m4tnh" (OuterVolumeSpecName: "kube-api-access-m4tnh") pod "3efa48e0-4cac-4152-8920-74324d606778" (UID: "3efa48e0-4cac-4152-8920-74324d606778"). InnerVolumeSpecName "kube-api-access-m4tnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.292585 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3efa48e0-4cac-4152-8920-74324d606778" (UID: "3efa48e0-4cac-4152-8920-74324d606778"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.292737 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-scripts" (OuterVolumeSpecName: "scripts") pod "3efa48e0-4cac-4152-8920-74324d606778" (UID: "3efa48e0-4cac-4152-8920-74324d606778"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.307509 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9bg6g" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.319506 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.330048 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3efa48e0-4cac-4152-8920-74324d606778" (UID: "3efa48e0-4cac-4152-8920-74324d606778"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.340348 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3efa48e0-4cac-4152-8920-74324d606778" (UID: "3efa48e0-4cac-4152-8920-74324d606778"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378177 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ctnz\" (UniqueName: \"kubernetes.io/projected/27a19519-b508-46b4-b8e7-87cf03d7c6bd-kube-api-access-4ctnz\") pod \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378240 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kk8k\" (UniqueName: \"kubernetes.io/projected/ee404871-3e83-4fa1-a773-df0c95222c32-kube-api-access-7kk8k\") pod \"ee404871-3e83-4fa1-a773-df0c95222c32\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378262 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-dns-swift-storage-0\") pod \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\" 
(UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378309 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-ovsdbserver-sb\") pod \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378357 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-dns-svc\") pod \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378401 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-config\") pod \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378461 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-config-data\") pod \"ee404871-3e83-4fa1-a773-df0c95222c32\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378483 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-ovsdbserver-nb\") pod \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\" (UID: \"27a19519-b508-46b4-b8e7-87cf03d7c6bd\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378513 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-scripts\") pod \"ee404871-3e83-4fa1-a773-df0c95222c32\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378552 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee404871-3e83-4fa1-a773-df0c95222c32-logs\") pod \"ee404871-3e83-4fa1-a773-df0c95222c32\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378573 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-combined-ca-bundle\") pod \"ee404871-3e83-4fa1-a773-df0c95222c32\" (UID: \"ee404871-3e83-4fa1-a773-df0c95222c32\") " Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378965 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378977 4756 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378986 4756 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.378996 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4tnh\" (UniqueName: \"kubernetes.io/projected/3efa48e0-4cac-4152-8920-74324d606778-kube-api-access-m4tnh\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.379005 4756 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.385551 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee404871-3e83-4fa1-a773-df0c95222c32-kube-api-access-7kk8k" (OuterVolumeSpecName: "kube-api-access-7kk8k") pod "ee404871-3e83-4fa1-a773-df0c95222c32" (UID: "ee404871-3e83-4fa1-a773-df0c95222c32"). InnerVolumeSpecName "kube-api-access-7kk8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.397688 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee404871-3e83-4fa1-a773-df0c95222c32-logs" (OuterVolumeSpecName: "logs") pod "ee404871-3e83-4fa1-a773-df0c95222c32" (UID: "ee404871-3e83-4fa1-a773-df0c95222c32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.452364 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-config-data" (OuterVolumeSpecName: "config-data") pod "3efa48e0-4cac-4152-8920-74324d606778" (UID: "3efa48e0-4cac-4152-8920-74324d606778"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.452863 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a19519-b508-46b4-b8e7-87cf03d7c6bd-kube-api-access-4ctnz" (OuterVolumeSpecName: "kube-api-access-4ctnz") pod "27a19519-b508-46b4-b8e7-87cf03d7c6bd" (UID: "27a19519-b508-46b4-b8e7-87cf03d7c6bd"). InnerVolumeSpecName "kube-api-access-4ctnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.454687 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-scripts" (OuterVolumeSpecName: "scripts") pod "ee404871-3e83-4fa1-a773-df0c95222c32" (UID: "ee404871-3e83-4fa1-a773-df0c95222c32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.483865 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee404871-3e83-4fa1-a773-df0c95222c32-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.483913 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ctnz\" (UniqueName: \"kubernetes.io/projected/27a19519-b508-46b4-b8e7-87cf03d7c6bd-kube-api-access-4ctnz\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.483925 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kk8k\" (UniqueName: \"kubernetes.io/projected/ee404871-3e83-4fa1-a773-df0c95222c32-kube-api-access-7kk8k\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.483935 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efa48e0-4cac-4152-8920-74324d606778-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.483945 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.600257 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "27a19519-b508-46b4-b8e7-87cf03d7c6bd" (UID: "27a19519-b508-46b4-b8e7-87cf03d7c6bd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.690838 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.773772 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27a19519-b508-46b4-b8e7-87cf03d7c6bd" (UID: "27a19519-b508-46b4-b8e7-87cf03d7c6bd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.788333 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee404871-3e83-4fa1-a773-df0c95222c32" (UID: "ee404871-3e83-4fa1-a773-df0c95222c32"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.804751 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.805049 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.839826 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "27a19519-b508-46b4-b8e7-87cf03d7c6bd" (UID: "27a19519-b508-46b4-b8e7-87cf03d7c6bd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.851601 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "27a19519-b508-46b4-b8e7-87cf03d7c6bd" (UID: "27a19519-b508-46b4-b8e7-87cf03d7c6bd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.863323 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-config-data" (OuterVolumeSpecName: "config-data") pod "ee404871-3e83-4fa1-a773-df0c95222c32" (UID: "ee404871-3e83-4fa1-a773-df0c95222c32"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.863733 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-config" (OuterVolumeSpecName: "config") pod "27a19519-b508-46b4-b8e7-87cf03d7c6bd" (UID: "27a19519-b508-46b4-b8e7-87cf03d7c6bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.907787 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.907839 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee404871-3e83-4fa1-a773-df0c95222c32-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.907866 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.907878 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27a19519-b508-46b4-b8e7-87cf03d7c6bd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:23 crc kubenswrapper[4756]: I1124 12:45:23.999486 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wz84p" event={"ID":"8f485ab9-01fd-4640-833e-8ee586798f2e","Type":"ContainerStarted","Data":"2530f55973ddededcba23f59504f547a30369490a802a58d58ad6857f48ea262"} Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.019514 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"03921298-d6d8-404c-9ee5-c5101a92892e","Type":"ContainerStarted","Data":"3e0fa2e0dca47ce5e12faaac5f1ce21785cfa1c9665d3a7b998629a3afa4acac"} Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.032229 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9bg6g" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.032502 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.032510 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-5lxht" event={"ID":"27a19519-b508-46b4-b8e7-87cf03d7c6bd","Type":"ContainerDied","Data":"f73cf45b668477439a39f02bab948fdc7f97f01c6545994712e880d845e2a11d"} Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.032556 4756 scope.go:117] "RemoveContainer" containerID="db7f73c5e8ac9272839f7353c43a2512123a455adc864d82f1d8e18c0b8ca000" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.032814 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4kphn" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.039280 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wz84p" podStartSLOduration=4.63078953 podStartE2EDuration="57.039258231s" podCreationTimestamp="2025-11-24 12:44:27 +0000 UTC" firstStartedPulling="2025-11-24 12:44:30.609566911 +0000 UTC m=+1002.967081053" lastFinishedPulling="2025-11-24 12:45:23.018035612 +0000 UTC m=+1055.375549754" observedRunningTime="2025-11-24 12:45:24.017545112 +0000 UTC m=+1056.375059274" watchObservedRunningTime="2025-11-24 12:45:24.039258231 +0000 UTC m=+1056.396772373" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.130353 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5lxht"] Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.145818 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-5lxht"] Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.157145 4756 scope.go:117] "RemoveContainer" containerID="b93ca14c435470dfd1d7f45a8878d0cb03e05b40c8342541a344e8fe7e849667" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.419284 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5995df89cc-sxcgq"] Nov 24 12:45:24 crc kubenswrapper[4756]: E1124 12:45:24.420022 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d2fe035-f2da-4a24-9796-1bdeb6198091" containerName="collect-profiles" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.420046 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2fe035-f2da-4a24-9796-1bdeb6198091" containerName="collect-profiles" Nov 24 12:45:24 crc kubenswrapper[4756]: E1124 12:45:24.420074 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a19519-b508-46b4-b8e7-87cf03d7c6bd" containerName="dnsmasq-dns" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 
12:45:24.420081 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a19519-b508-46b4-b8e7-87cf03d7c6bd" containerName="dnsmasq-dns" Nov 24 12:45:24 crc kubenswrapper[4756]: E1124 12:45:24.420098 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a19519-b508-46b4-b8e7-87cf03d7c6bd" containerName="init" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.420104 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a19519-b508-46b4-b8e7-87cf03d7c6bd" containerName="init" Nov 24 12:45:24 crc kubenswrapper[4756]: E1124 12:45:24.420113 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee404871-3e83-4fa1-a773-df0c95222c32" containerName="placement-db-sync" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.420119 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee404871-3e83-4fa1-a773-df0c95222c32" containerName="placement-db-sync" Nov 24 12:45:24 crc kubenswrapper[4756]: E1124 12:45:24.420128 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3efa48e0-4cac-4152-8920-74324d606778" containerName="keystone-bootstrap" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.420143 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efa48e0-4cac-4152-8920-74324d606778" containerName="keystone-bootstrap" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.426715 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee404871-3e83-4fa1-a773-df0c95222c32" containerName="placement-db-sync" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.426771 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a19519-b508-46b4-b8e7-87cf03d7c6bd" containerName="dnsmasq-dns" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.426786 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d2fe035-f2da-4a24-9796-1bdeb6198091" containerName="collect-profiles" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.426814 4756 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3efa48e0-4cac-4152-8920-74324d606778" containerName="keystone-bootstrap" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.427529 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.430628 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.430875 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.431029 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.431195 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.431439 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.431544 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2m2wq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.434463 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5995df89cc-sxcgq"] Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.451673 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5f8c9745c6-b4wdj"] Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.468306 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.479065 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.480070 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.484646 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.484646 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.490432 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bzftr" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.491115 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a19519-b508-46b4-b8e7-87cf03d7c6bd" path="/var/lib/kubelet/pods/27a19519-b508-46b4-b8e7-87cf03d7c6bd/volumes" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.491786 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f8c9745c6-b4wdj"] Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.522640 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-combined-ca-bundle\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.522951 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-config-data\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.523048 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-credential-keys\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.523117 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-logs\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.523210 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmgs\" (UniqueName: \"kubernetes.io/projected/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-kube-api-access-8gmgs\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.523288 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-public-tls-certs\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.523366 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-scripts\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.523458 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-scripts\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.523544 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcrx\" (UniqueName: \"kubernetes.io/projected/77785a15-6850-4685-8fbd-b129153baa32-kube-api-access-mlcrx\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.523607 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-public-tls-certs\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.523677 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-config-data\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.523813 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-internal-tls-certs\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.523888 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-combined-ca-bundle\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.523969 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-fernet-keys\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.524091 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-internal-tls-certs\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.625733 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-credential-keys\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.625776 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-logs\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.625804 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gmgs\" (UniqueName: \"kubernetes.io/projected/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-kube-api-access-8gmgs\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.625831 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-public-tls-certs\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.625852 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-scripts\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.625868 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-scripts\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.625934 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlcrx\" (UniqueName: \"kubernetes.io/projected/77785a15-6850-4685-8fbd-b129153baa32-kube-api-access-mlcrx\") pod 
\"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.625961 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-public-tls-certs\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.625985 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-config-data\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.626040 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-internal-tls-certs\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.626062 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-combined-ca-bundle\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.626078 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-fernet-keys\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " 
pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.626103 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-internal-tls-certs\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.626193 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-combined-ca-bundle\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.626212 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-config-data\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.626374 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-logs\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.635234 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-internal-tls-certs\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.636239 
4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-config-data\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.636373 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-combined-ca-bundle\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.636451 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-config-data\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.636509 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-public-tls-certs\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.636593 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-public-tls-certs\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.636705 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-internal-tls-certs\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.637653 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-scripts\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.637867 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-credential-keys\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.638925 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-combined-ca-bundle\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.640211 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77785a15-6850-4685-8fbd-b129153baa32-fernet-keys\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.651725 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-scripts\") pod \"placement-5f8c9745c6-b4wdj\" (UID: 
\"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.655797 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gmgs\" (UniqueName: \"kubernetes.io/projected/b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7-kube-api-access-8gmgs\") pod \"placement-5f8c9745c6-b4wdj\" (UID: \"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7\") " pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.661018 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlcrx\" (UniqueName: \"kubernetes.io/projected/77785a15-6850-4685-8fbd-b129153baa32-kube-api-access-mlcrx\") pod \"keystone-5995df89cc-sxcgq\" (UID: \"77785a15-6850-4685-8fbd-b129153baa32\") " pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.753906 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:24 crc kubenswrapper[4756]: I1124 12:45:24.790613 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:25 crc kubenswrapper[4756]: I1124 12:45:25.077620 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vbnrc" event={"ID":"e5e34263-c415-4300-a110-ab2ad6787566","Type":"ContainerStarted","Data":"45f7b658dcfd3d76440a1bb287296bee264cefd411037fb9f7c5fa6f44b7b6b3"} Nov 24 12:45:25 crc kubenswrapper[4756]: I1124 12:45:25.105913 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-vbnrc" podStartSLOduration=4.054367624 podStartE2EDuration="58.105890715s" podCreationTimestamp="2025-11-24 12:44:27 +0000 UTC" firstStartedPulling="2025-11-24 12:44:28.965689288 +0000 UTC m=+1001.323203430" lastFinishedPulling="2025-11-24 12:45:23.017212379 +0000 UTC m=+1055.374726521" observedRunningTime="2025-11-24 12:45:25.101617437 +0000 UTC m=+1057.459131579" watchObservedRunningTime="2025-11-24 12:45:25.105890715 +0000 UTC m=+1057.463404857" Nov 24 12:45:25 crc kubenswrapper[4756]: I1124 12:45:25.344498 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5995df89cc-sxcgq"] Nov 24 12:45:25 crc kubenswrapper[4756]: W1124 12:45:25.414092 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77785a15_6850_4685_8fbd_b129153baa32.slice/crio-95dc78a44dfd0fd2e57315d4cb58d6d479d89533a7b05c237af825df53e83a90 WatchSource:0}: Error finding container 95dc78a44dfd0fd2e57315d4cb58d6d479d89533a7b05c237af825df53e83a90: Status 404 returned error can't find the container with id 95dc78a44dfd0fd2e57315d4cb58d6d479d89533a7b05c237af825df53e83a90 Nov 24 12:45:25 crc kubenswrapper[4756]: I1124 12:45:25.585588 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f8c9745c6-b4wdj"] Nov 24 12:45:26 crc kubenswrapper[4756]: I1124 12:45:26.091091 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-5f8c9745c6-b4wdj" event={"ID":"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7","Type":"ContainerStarted","Data":"fad64758f13395d6f5732f1caa25547d69bd9cc0844a865a5cd20d12e571fb35"} Nov 24 12:45:26 crc kubenswrapper[4756]: I1124 12:45:26.091462 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f8c9745c6-b4wdj" event={"ID":"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7","Type":"ContainerStarted","Data":"a42c8f22733c879c329eb8b197a6a67f671abeaed830192af970174336527e20"} Nov 24 12:45:26 crc kubenswrapper[4756]: I1124 12:45:26.095306 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5995df89cc-sxcgq" event={"ID":"77785a15-6850-4685-8fbd-b129153baa32","Type":"ContainerStarted","Data":"74cb0c2c6938d0c866d8d479d1085aac195bdbc2a6964e03f3be518a6771a12f"} Nov 24 12:45:26 crc kubenswrapper[4756]: I1124 12:45:26.095352 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5995df89cc-sxcgq" event={"ID":"77785a15-6850-4685-8fbd-b129153baa32","Type":"ContainerStarted","Data":"95dc78a44dfd0fd2e57315d4cb58d6d479d89533a7b05c237af825df53e83a90"} Nov 24 12:45:26 crc kubenswrapper[4756]: I1124 12:45:26.096477 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:26 crc kubenswrapper[4756]: I1124 12:45:26.136944 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5995df89cc-sxcgq" podStartSLOduration=2.136927985 podStartE2EDuration="2.136927985s" podCreationTimestamp="2025-11-24 12:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:26.132087561 +0000 UTC m=+1058.489601703" watchObservedRunningTime="2025-11-24 12:45:26.136927985 +0000 UTC m=+1058.494442127" Nov 24 12:45:26 crc kubenswrapper[4756]: I1124 12:45:26.970283 4756 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/horizon-7b958b5cb8-lff28" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Nov 24 12:45:27 crc kubenswrapper[4756]: I1124 12:45:27.068990 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-585c6478b8-gsbzg" podUID="6ae02ece-f457-4943-92fe-9569b5083f41" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Nov 24 12:45:27 crc kubenswrapper[4756]: I1124 12:45:27.113566 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f8c9745c6-b4wdj" event={"ID":"b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7","Type":"ContainerStarted","Data":"d0467564eab6cb6d996b168328e5c77effa8a3b10511100e5f7f3d8cc9b777c8"} Nov 24 12:45:27 crc kubenswrapper[4756]: I1124 12:45:27.113881 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:27 crc kubenswrapper[4756]: I1124 12:45:27.113904 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:27 crc kubenswrapper[4756]: I1124 12:45:27.135773 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f8c9745c6-b4wdj" podStartSLOduration=3.135753375 podStartE2EDuration="3.135753375s" podCreationTimestamp="2025-11-24 12:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:27.135283112 +0000 UTC m=+1059.492797264" watchObservedRunningTime="2025-11-24 12:45:27.135753375 +0000 UTC m=+1059.493267517" Nov 24 12:45:34 crc kubenswrapper[4756]: E1124 12:45:34.156128 4756 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" Nov 24 12:45:34 crc kubenswrapper[4756]: I1124 12:45:34.185582 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03921298-d6d8-404c-9ee5-c5101a92892e","Type":"ContainerStarted","Data":"48ab96ca918f719608a94ba560c8d4f7cc100e5787e813d9b0b0e94ae767a57a"} Nov 24 12:45:34 crc kubenswrapper[4756]: I1124 12:45:34.185675 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" containerName="ceilometer-notification-agent" containerID="cri-o://a3050d179fbdab481a2010a5aaac94e4bfd117b6378e0ed2b08f39d1f8acdb47" gracePeriod=30 Nov 24 12:45:34 crc kubenswrapper[4756]: I1124 12:45:34.185696 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:45:34 crc kubenswrapper[4756]: I1124 12:45:34.185721 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" containerName="sg-core" containerID="cri-o://3e0fa2e0dca47ce5e12faaac5f1ce21785cfa1c9665d3a7b998629a3afa4acac" gracePeriod=30 Nov 24 12:45:34 crc kubenswrapper[4756]: I1124 12:45:34.185899 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" containerName="proxy-httpd" containerID="cri-o://48ab96ca918f719608a94ba560c8d4f7cc100e5787e813d9b0b0e94ae767a57a" gracePeriod=30 Nov 24 12:45:34 crc kubenswrapper[4756]: I1124 12:45:34.189743 4756 generic.go:334] "Generic (PLEG): container finished" podID="8f485ab9-01fd-4640-833e-8ee586798f2e" containerID="2530f55973ddededcba23f59504f547a30369490a802a58d58ad6857f48ea262" 
exitCode=0 Nov 24 12:45:34 crc kubenswrapper[4756]: I1124 12:45:34.189792 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wz84p" event={"ID":"8f485ab9-01fd-4640-833e-8ee586798f2e","Type":"ContainerDied","Data":"2530f55973ddededcba23f59504f547a30369490a802a58d58ad6857f48ea262"} Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.207820 4756 generic.go:334] "Generic (PLEG): container finished" podID="03921298-d6d8-404c-9ee5-c5101a92892e" containerID="3e0fa2e0dca47ce5e12faaac5f1ce21785cfa1c9665d3a7b998629a3afa4acac" exitCode=2 Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.207860 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03921298-d6d8-404c-9ee5-c5101a92892e","Type":"ContainerDied","Data":"3e0fa2e0dca47ce5e12faaac5f1ce21785cfa1c9665d3a7b998629a3afa4acac"} Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.210795 4756 generic.go:334] "Generic (PLEG): container finished" podID="e5e34263-c415-4300-a110-ab2ad6787566" containerID="45f7b658dcfd3d76440a1bb287296bee264cefd411037fb9f7c5fa6f44b7b6b3" exitCode=0 Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.210827 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vbnrc" event={"ID":"e5e34263-c415-4300-a110-ab2ad6787566","Type":"ContainerDied","Data":"45f7b658dcfd3d76440a1bb287296bee264cefd411037fb9f7c5fa6f44b7b6b3"} Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.631629 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wz84p"
Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.714825 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hz4g\" (UniqueName: \"kubernetes.io/projected/8f485ab9-01fd-4640-833e-8ee586798f2e-kube-api-access-8hz4g\") pod \"8f485ab9-01fd-4640-833e-8ee586798f2e\" (UID: \"8f485ab9-01fd-4640-833e-8ee586798f2e\") "
Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.715090 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f485ab9-01fd-4640-833e-8ee586798f2e-db-sync-config-data\") pod \"8f485ab9-01fd-4640-833e-8ee586798f2e\" (UID: \"8f485ab9-01fd-4640-833e-8ee586798f2e\") "
Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.715260 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f485ab9-01fd-4640-833e-8ee586798f2e-combined-ca-bundle\") pod \"8f485ab9-01fd-4640-833e-8ee586798f2e\" (UID: \"8f485ab9-01fd-4640-833e-8ee586798f2e\") "
Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.724221 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f485ab9-01fd-4640-833e-8ee586798f2e-kube-api-access-8hz4g" (OuterVolumeSpecName: "kube-api-access-8hz4g") pod "8f485ab9-01fd-4640-833e-8ee586798f2e" (UID: "8f485ab9-01fd-4640-833e-8ee586798f2e"). InnerVolumeSpecName "kube-api-access-8hz4g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.724382 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f485ab9-01fd-4640-833e-8ee586798f2e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8f485ab9-01fd-4640-833e-8ee586798f2e" (UID: "8f485ab9-01fd-4640-833e-8ee586798f2e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.748113 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f485ab9-01fd-4640-833e-8ee586798f2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f485ab9-01fd-4640-833e-8ee586798f2e" (UID: "8f485ab9-01fd-4640-833e-8ee586798f2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.817048 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f485ab9-01fd-4640-833e-8ee586798f2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.817089 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hz4g\" (UniqueName: \"kubernetes.io/projected/8f485ab9-01fd-4640-833e-8ee586798f2e-kube-api-access-8hz4g\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:35 crc kubenswrapper[4756]: I1124 12:45:35.817104 4756 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f485ab9-01fd-4640-833e-8ee586798f2e-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.226410 4756 generic.go:334] "Generic (PLEG): container finished" podID="03921298-d6d8-404c-9ee5-c5101a92892e" containerID="a3050d179fbdab481a2010a5aaac94e4bfd117b6378e0ed2b08f39d1f8acdb47" exitCode=0
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.226508 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03921298-d6d8-404c-9ee5-c5101a92892e","Type":"ContainerDied","Data":"a3050d179fbdab481a2010a5aaac94e4bfd117b6378e0ed2b08f39d1f8acdb47"}
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.229902 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wz84p"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.229897 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wz84p" event={"ID":"8f485ab9-01fd-4640-833e-8ee586798f2e","Type":"ContainerDied","Data":"b0f578169c4d766aca6ba7ea3239b021b7736ffb79b71939d92c21adc4702e1a"}
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.229954 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0f578169c4d766aca6ba7ea3239b021b7736ffb79b71939d92c21adc4702e1a"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.501474 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-568b98fff9-ngjr7"]
Nov 24 12:45:36 crc kubenswrapper[4756]: E1124 12:45:36.504073 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f485ab9-01fd-4640-833e-8ee586798f2e" containerName="barbican-db-sync"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.504105 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f485ab9-01fd-4640-833e-8ee586798f2e" containerName="barbican-db-sync"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.504577 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f485ab9-01fd-4640-833e-8ee586798f2e" containerName="barbican-db-sync"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.512613 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.520869 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.521989 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.531798 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-z5hzj"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.532293 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-568b98fff9-ngjr7"]
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.532906 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab695f1-c645-42dc-be38-2935fbe4977d-logs\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.532940 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab695f1-c645-42dc-be38-2935fbe4977d-combined-ca-bundle\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.532979 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab695f1-c645-42dc-be38-2935fbe4977d-config-data\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.533003 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ab695f1-c645-42dc-be38-2935fbe4977d-config-data-custom\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.533029 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d846l\" (UniqueName: \"kubernetes.io/projected/2ab695f1-c645-42dc-be38-2935fbe4977d-kube-api-access-d846l\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.594060 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-77956fdfb6-wlggx"]
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.597061 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.600538 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.626251 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77956fdfb6-wlggx"]
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.634978 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb164396-9603-40ac-a47b-5b8feb1be35c-config-data\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.635129 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn75d\" (UniqueName: \"kubernetes.io/projected/fb164396-9603-40ac-a47b-5b8feb1be35c-kube-api-access-fn75d\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.635247 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb164396-9603-40ac-a47b-5b8feb1be35c-logs\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.635428 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab695f1-c645-42dc-be38-2935fbe4977d-logs\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.635533 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab695f1-c645-42dc-be38-2935fbe4977d-combined-ca-bundle\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.635731 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb164396-9603-40ac-a47b-5b8feb1be35c-combined-ca-bundle\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.635827 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab695f1-c645-42dc-be38-2935fbe4977d-config-data\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.635928 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ab695f1-c645-42dc-be38-2935fbe4977d-config-data-custom\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.636028 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d846l\" (UniqueName: \"kubernetes.io/projected/2ab695f1-c645-42dc-be38-2935fbe4977d-kube-api-access-d846l\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.636328 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb164396-9603-40ac-a47b-5b8feb1be35c-config-data-custom\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.636898 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab695f1-c645-42dc-be38-2935fbe4977d-logs\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.644879 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ab695f1-c645-42dc-be38-2935fbe4977d-config-data-custom\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.645743 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab695f1-c645-42dc-be38-2935fbe4977d-combined-ca-bundle\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.647018 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab695f1-c645-42dc-be38-2935fbe4977d-config-data\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.672014 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d846l\" (UniqueName: \"kubernetes.io/projected/2ab695f1-c645-42dc-be38-2935fbe4977d-kube-api-access-d846l\") pod \"barbican-worker-568b98fff9-ngjr7\" (UID: \"2ab695f1-c645-42dc-be38-2935fbe4977d\") " pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.704845 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5vjv2"]
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.707015 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.730481 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5vjv2"]
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.738016 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb164396-9603-40ac-a47b-5b8feb1be35c-logs\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.738400 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb164396-9603-40ac-a47b-5b8feb1be35c-combined-ca-bundle\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.738759 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.738870 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7glf\" (UniqueName: \"kubernetes.io/projected/bcb448fa-f259-458b-ba2c-07712e71af0d-kube-api-access-k7glf\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.738970 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.739077 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.739151 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.739301 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-config\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.739436 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb164396-9603-40ac-a47b-5b8feb1be35c-config-data-custom\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.739566 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb164396-9603-40ac-a47b-5b8feb1be35c-config-data\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.739682 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn75d\" (UniqueName: \"kubernetes.io/projected/fb164396-9603-40ac-a47b-5b8feb1be35c-kube-api-access-fn75d\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.740608 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb164396-9603-40ac-a47b-5b8feb1be35c-logs\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.756434 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb164396-9603-40ac-a47b-5b8feb1be35c-config-data\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.757278 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb164396-9603-40ac-a47b-5b8feb1be35c-combined-ca-bundle\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.757902 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb164396-9603-40ac-a47b-5b8feb1be35c-config-data-custom\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.784364 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn75d\" (UniqueName: \"kubernetes.io/projected/fb164396-9603-40ac-a47b-5b8feb1be35c-kube-api-access-fn75d\") pod \"barbican-keystone-listener-77956fdfb6-wlggx\" (UID: \"fb164396-9603-40ac-a47b-5b8feb1be35c\") " pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.840890 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.841262 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.842530 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.842488 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.842456 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.843106 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.843464 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-config\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.844037 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-config\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.844372 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.845004 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7glf\" (UniqueName: \"kubernetes.io/projected/bcb448fa-f259-458b-ba2c-07712e71af0d-kube-api-access-k7glf\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.844949 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.876791 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vbnrc"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.877481 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7glf\" (UniqueName: \"kubernetes.io/projected/bcb448fa-f259-458b-ba2c-07712e71af0d-kube-api-access-k7glf\") pod \"dnsmasq-dns-848cf88cfc-5vjv2\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.891628 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-568b98fff9-ngjr7"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.927614 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.944051 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.945599 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74996ffc74-hpkv2"]
Nov 24 12:45:36 crc kubenswrapper[4756]: E1124 12:45:36.956359 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e34263-c415-4300-a110-ab2ad6787566" containerName="cinder-db-sync"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.956399 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e34263-c415-4300-a110-ab2ad6787566" containerName="cinder-db-sync"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.956649 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e34263-c415-4300-a110-ab2ad6787566" containerName="cinder-db-sync"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.958474 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.960595 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.969149 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b958b5cb8-lff28" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused"
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.973974 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74996ffc74-hpkv2"]
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.992182 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-config-data\") pod \"e5e34263-c415-4300-a110-ab2ad6787566\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") "
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.992282 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzkbq\" (UniqueName: \"kubernetes.io/projected/e5e34263-c415-4300-a110-ab2ad6787566-kube-api-access-mzkbq\") pod \"e5e34263-c415-4300-a110-ab2ad6787566\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") "
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.992367 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e34263-c415-4300-a110-ab2ad6787566-etc-machine-id\") pod \"e5e34263-c415-4300-a110-ab2ad6787566\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") "
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.992395 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-combined-ca-bundle\") pod \"e5e34263-c415-4300-a110-ab2ad6787566\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") "
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.992524 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-scripts\") pod \"e5e34263-c415-4300-a110-ab2ad6787566\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") "
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.992596 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-db-sync-config-data\") pod \"e5e34263-c415-4300-a110-ab2ad6787566\" (UID: \"e5e34263-c415-4300-a110-ab2ad6787566\") "
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.993577 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5e34263-c415-4300-a110-ab2ad6787566-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e5e34263-c415-4300-a110-ab2ad6787566" (UID: "e5e34263-c415-4300-a110-ab2ad6787566"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.996362 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e5e34263-c415-4300-a110-ab2ad6787566" (UID: "e5e34263-c415-4300-a110-ab2ad6787566"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:45:36 crc kubenswrapper[4756]: I1124 12:45:36.999321 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-scripts" (OuterVolumeSpecName: "scripts") pod "e5e34263-c415-4300-a110-ab2ad6787566" (UID: "e5e34263-c415-4300-a110-ab2ad6787566"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.010377 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e34263-c415-4300-a110-ab2ad6787566-kube-api-access-mzkbq" (OuterVolumeSpecName: "kube-api-access-mzkbq") pod "e5e34263-c415-4300-a110-ab2ad6787566" (UID: "e5e34263-c415-4300-a110-ab2ad6787566"). InnerVolumeSpecName "kube-api-access-mzkbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.047341 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5e34263-c415-4300-a110-ab2ad6787566" (UID: "e5e34263-c415-4300-a110-ab2ad6787566"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.066836 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-585c6478b8-gsbzg" podUID="6ae02ece-f457-4943-92fe-9569b5083f41" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.079312 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-config-data" (OuterVolumeSpecName: "config-data") pod "e5e34263-c415-4300-a110-ab2ad6787566" (UID: "e5e34263-c415-4300-a110-ab2ad6787566"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.096573 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qblsb\" (UniqueName: \"kubernetes.io/projected/3ac68bd7-a5af-4908-8249-bfa5ef938acc-kube-api-access-qblsb\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.096828 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-config-data\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.097088 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ac68bd7-a5af-4908-8249-bfa5ef938acc-logs\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.100753 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-config-data-custom\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.101612 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-combined-ca-bundle\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.101939 4756 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.101978 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.101993 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzkbq\" (UniqueName: \"kubernetes.io/projected/e5e34263-c415-4300-a110-ab2ad6787566-kube-api-access-mzkbq\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.102007 4756 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e34263-c415-4300-a110-ab2ad6787566-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.102018 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.102031 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e34263-c415-4300-a110-ab2ad6787566-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.206326 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-config-data\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.206824 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ac68bd7-a5af-4908-8249-bfa5ef938acc-logs\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.206882 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-config-data-custom\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.206902 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-combined-ca-bundle\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.206958 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qblsb\" (UniqueName: \"kubernetes.io/projected/3ac68bd7-a5af-4908-8249-bfa5ef938acc-kube-api-access-qblsb\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.207495 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ac68bd7-a5af-4908-8249-bfa5ef938acc-logs\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.226019 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-config-data-custom\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.226650 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-config-data\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2"
Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.227109 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-combined-ca-bundle\")
pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.240684 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qblsb\" (UniqueName: \"kubernetes.io/projected/3ac68bd7-a5af-4908-8249-bfa5ef938acc-kube-api-access-qblsb\") pod \"barbican-api-74996ffc74-hpkv2\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " pod="openstack/barbican-api-74996ffc74-hpkv2" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.267007 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vbnrc" event={"ID":"e5e34263-c415-4300-a110-ab2ad6787566","Type":"ContainerDied","Data":"0abf8181c54af0fbe088ac540bbd2e1f80fae1e98dab2ca5817bf548b318cd48"} Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.267038 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0abf8181c54af0fbe088ac540bbd2e1f80fae1e98dab2ca5817bf548b318cd48" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.267091 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vbnrc" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.289069 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74996ffc74-hpkv2" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.502227 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.504855 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.509046 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.509448 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-glrkw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.509059 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.513256 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.532921 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.597483 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77956fdfb6-wlggx"] Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.614339 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.614421 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8vw5\" (UniqueName: \"kubernetes.io/projected/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-kube-api-access-k8vw5\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.614484 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-config-data\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.614510 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.614559 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-scripts\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.614586 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.635173 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5vjv2"] Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.685488 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-7nxdw"] Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.687389 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.714887 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-7nxdw"] Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.731153 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.731287 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-scripts\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.731342 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.731591 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.731720 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8vw5\" (UniqueName: \"kubernetes.io/projected/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-kube-api-access-k8vw5\") pod 
\"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.731826 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-config-data\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.731990 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.742607 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.750842 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.756900 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-scripts\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.758768 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-config-data\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.777652 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8vw5\" (UniqueName: \"kubernetes.io/projected/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-kube-api-access-k8vw5\") pod \"cinder-scheduler-0\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.787280 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-568b98fff9-ngjr7"] Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.840705 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.840769 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdzpp\" (UniqueName: \"kubernetes.io/projected/fe242a81-41af-42a9-8934-34f7d0ef485b-kube-api-access-fdzpp\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.840800 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: 
\"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.840835 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-config\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.840872 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.840899 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-dns-svc\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.870861 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.923278 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.925110 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.929796 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.948677 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.949371 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdzpp\" (UniqueName: \"kubernetes.io/projected/fe242a81-41af-42a9-8934-34f7d0ef485b-kube-api-access-fdzpp\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.952706 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.985901 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.985996 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-config\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.986090 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.986144 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-dns-svc\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:37 crc kubenswrapper[4756]: I1124 12:45:37.991330 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-dns-svc\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.006682 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.007482 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdzpp\" (UniqueName: \"kubernetes.io/projected/fe242a81-41af-42a9-8934-34f7d0ef485b-kube-api-access-fdzpp\") pod 
\"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.007991 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.008465 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-config\") pod \"dnsmasq-dns-6578955fd5-7nxdw\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.021178 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.048010 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5vjv2"] Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.088439 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-scripts\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.088513 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-config-data\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.088566 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.088639 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d688d03-563b-4cc5-b437-c867bd19e02d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.088676 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czkd9\" (UniqueName: \"kubernetes.io/projected/9d688d03-563b-4cc5-b437-c867bd19e02d-kube-api-access-czkd9\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.088692 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-config-data-custom\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.088722 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d688d03-563b-4cc5-b437-c867bd19e02d-logs\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.170636 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.190128 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-scripts\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.190319 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-config-data\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.194085 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.195013 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d688d03-563b-4cc5-b437-c867bd19e02d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.195416 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czkd9\" (UniqueName: \"kubernetes.io/projected/9d688d03-563b-4cc5-b437-c867bd19e02d-kube-api-access-czkd9\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.195520 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-config-data-custom\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.195662 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d688d03-563b-4cc5-b437-c867bd19e02d-logs\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.195960 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d688d03-563b-4cc5-b437-c867bd19e02d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.191035 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.209972 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74996ffc74-hpkv2"] Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.210046 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.211558 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d688d03-563b-4cc5-b437-c867bd19e02d-logs\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.211907 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-config-data\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.219061 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-scripts\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.243768 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-config-data-custom\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.254926 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czkd9\" (UniqueName: \"kubernetes.io/projected/9d688d03-563b-4cc5-b437-c867bd19e02d-kube-api-access-czkd9\") pod \"cinder-api-0\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.256181 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:45:38 crc kubenswrapper[4756]: W1124 12:45:38.271390 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ac68bd7_a5af_4908_8249_bfa5ef938acc.slice/crio-9b091de439eae88567d9bd8980e475c7bf4e714f9bc495f0968d2a1e7750afc0 WatchSource:0}: Error finding container 9b091de439eae88567d9bd8980e475c7bf4e714f9bc495f0968d2a1e7750afc0: Status 404 returned error can't find the container with id 9b091de439eae88567d9bd8980e475c7bf4e714f9bc495f0968d2a1e7750afc0 Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.303572 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx" event={"ID":"fb164396-9603-40ac-a47b-5b8feb1be35c","Type":"ContainerStarted","Data":"d43e7e2f3602c8cb87b0dd5192acd62bb290441105bbc585feb68440dcd6723f"} Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.312655 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2" event={"ID":"bcb448fa-f259-458b-ba2c-07712e71af0d","Type":"ContainerStarted","Data":"f72e91fc93bb9ef17a35807598330fd3417d50d008b407770d76030ef1e0de03"} Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.319719 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-568b98fff9-ngjr7" event={"ID":"2ab695f1-c645-42dc-be38-2935fbe4977d","Type":"ContainerStarted","Data":"ce91aadfd999db5a8273fe2adb9c530ead8a12e9cf339b52e201c73ccb93b18c"} Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.845751 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:45:38 crc kubenswrapper[4756]: I1124 12:45:38.966206 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-7nxdw"] Nov 24 12:45:39 crc kubenswrapper[4756]: I1124 12:45:39.106982 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-api-0"] Nov 24 12:45:39 crc kubenswrapper[4756]: I1124 12:45:39.347145 4756 generic.go:334] "Generic (PLEG): container finished" podID="bcb448fa-f259-458b-ba2c-07712e71af0d" containerID="4a6887593749a922af56495f4bf2974b3eea1cdc80c53806ed39f00f3da46149" exitCode=0 Nov 24 12:45:39 crc kubenswrapper[4756]: I1124 12:45:39.347245 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2" event={"ID":"bcb448fa-f259-458b-ba2c-07712e71af0d","Type":"ContainerDied","Data":"4a6887593749a922af56495f4bf2974b3eea1cdc80c53806ed39f00f3da46149"} Nov 24 12:45:39 crc kubenswrapper[4756]: I1124 12:45:39.355537 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa","Type":"ContainerStarted","Data":"c20de72026fbace770490da1e3b41e7e67946a682d028ee2d4d7e2b2030b66a0"} Nov 24 12:45:39 crc kubenswrapper[4756]: I1124 12:45:39.360438 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" event={"ID":"fe242a81-41af-42a9-8934-34f7d0ef485b","Type":"ContainerStarted","Data":"0577e5d302338093fb07aafe3066b46b7720f6a7b4fe614629ff59e8da0f56e2"} Nov 24 12:45:39 crc kubenswrapper[4756]: I1124 12:45:39.377875 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74996ffc74-hpkv2" event={"ID":"3ac68bd7-a5af-4908-8249-bfa5ef938acc","Type":"ContainerStarted","Data":"cdad19809a7c02aa62180047b38d02223334a5cba447dbf357aa3bc48ab26742"} Nov 24 12:45:39 crc kubenswrapper[4756]: I1124 12:45:39.377951 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74996ffc74-hpkv2" event={"ID":"3ac68bd7-a5af-4908-8249-bfa5ef938acc","Type":"ContainerStarted","Data":"65fb38212c4d181639670f15a940278b97431518b19615d7f411a29773cec49a"} Nov 24 12:45:39 crc kubenswrapper[4756]: I1124 12:45:39.377963 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-74996ffc74-hpkv2" event={"ID":"3ac68bd7-a5af-4908-8249-bfa5ef938acc","Type":"ContainerStarted","Data":"9b091de439eae88567d9bd8980e475c7bf4e714f9bc495f0968d2a1e7750afc0"} Nov 24 12:45:39 crc kubenswrapper[4756]: I1124 12:45:39.378001 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74996ffc74-hpkv2" Nov 24 12:45:39 crc kubenswrapper[4756]: I1124 12:45:39.378632 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74996ffc74-hpkv2" Nov 24 12:45:39 crc kubenswrapper[4756]: I1124 12:45:39.380412 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9d688d03-563b-4cc5-b437-c867bd19e02d","Type":"ContainerStarted","Data":"faf1aac5fe2fa3dc4aabcf8f1a7f5e242e690b7fda945cc636e353044c844543"} Nov 24 12:45:39 crc kubenswrapper[4756]: I1124 12:45:39.445758 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-74996ffc74-hpkv2" podStartSLOduration=3.445723409 podStartE2EDuration="3.445723409s" podCreationTimestamp="2025-11-24 12:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:39.4044536 +0000 UTC m=+1071.761967852" watchObservedRunningTime="2025-11-24 12:45:39.445723409 +0000 UTC m=+1071.803237551" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.021996 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.071134 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-dns-svc\") pod \"bcb448fa-f259-458b-ba2c-07712e71af0d\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.071572 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-ovsdbserver-sb\") pod \"bcb448fa-f259-458b-ba2c-07712e71af0d\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.071629 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-dns-swift-storage-0\") pod \"bcb448fa-f259-458b-ba2c-07712e71af0d\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.071665 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-ovsdbserver-nb\") pod \"bcb448fa-f259-458b-ba2c-07712e71af0d\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.071720 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7glf\" (UniqueName: \"kubernetes.io/projected/bcb448fa-f259-458b-ba2c-07712e71af0d-kube-api-access-k7glf\") pod \"bcb448fa-f259-458b-ba2c-07712e71af0d\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.071759 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-config\") pod \"bcb448fa-f259-458b-ba2c-07712e71af0d\" (UID: \"bcb448fa-f259-458b-ba2c-07712e71af0d\") " Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.080658 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb448fa-f259-458b-ba2c-07712e71af0d-kube-api-access-k7glf" (OuterVolumeSpecName: "kube-api-access-k7glf") pod "bcb448fa-f259-458b-ba2c-07712e71af0d" (UID: "bcb448fa-f259-458b-ba2c-07712e71af0d"). InnerVolumeSpecName "kube-api-access-k7glf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.119428 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-config" (OuterVolumeSpecName: "config") pod "bcb448fa-f259-458b-ba2c-07712e71af0d" (UID: "bcb448fa-f259-458b-ba2c-07712e71af0d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.126004 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bcb448fa-f259-458b-ba2c-07712e71af0d" (UID: "bcb448fa-f259-458b-ba2c-07712e71af0d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.141276 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bcb448fa-f259-458b-ba2c-07712e71af0d" (UID: "bcb448fa-f259-458b-ba2c-07712e71af0d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.141408 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bcb448fa-f259-458b-ba2c-07712e71af0d" (UID: "bcb448fa-f259-458b-ba2c-07712e71af0d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.173931 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.173972 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.173986 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.173998 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7glf\" (UniqueName: \"kubernetes.io/projected/bcb448fa-f259-458b-ba2c-07712e71af0d-kube-api-access-k7glf\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.174012 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.178863 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bcb448fa-f259-458b-ba2c-07712e71af0d" (UID: "bcb448fa-f259-458b-ba2c-07712e71af0d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.275706 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bcb448fa-f259-458b-ba2c-07712e71af0d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.398588 4756 generic.go:334] "Generic (PLEG): container finished" podID="fe242a81-41af-42a9-8934-34f7d0ef485b" containerID="c65bafe6ef3e1115c1b26acc55ab9dee565a08d3f0572b393c6b6f89476317a2" exitCode=0 Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.398691 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" event={"ID":"fe242a81-41af-42a9-8934-34f7d0ef485b","Type":"ContainerDied","Data":"c65bafe6ef3e1115c1b26acc55ab9dee565a08d3f0572b393c6b6f89476317a2"} Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.401710 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9d688d03-563b-4cc5-b437-c867bd19e02d","Type":"ContainerStarted","Data":"917ad4d0aab31f6f77dfc32d2113c7345e647bbda154dc5735053e071ba7bb32"} Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.404099 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.404702 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-5vjv2" event={"ID":"bcb448fa-f259-458b-ba2c-07712e71af0d","Type":"ContainerDied","Data":"f72e91fc93bb9ef17a35807598330fd3417d50d008b407770d76030ef1e0de03"} Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.404752 4756 scope.go:117] "RemoveContainer" containerID="4a6887593749a922af56495f4bf2974b3eea1cdc80c53806ed39f00f3da46149" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.505970 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5vjv2"] Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.519583 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5vjv2"] Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.578859 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7b4ddfdbf7-zd9s5" Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.648711 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75ff655f7b-j2rvr"] Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.649048 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75ff655f7b-j2rvr" podUID="c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" containerName="neutron-api" containerID="cri-o://c50b718041377a553045016e4b00c94cdc1e71857eb88178cbdb72ba0348e8ef" gracePeriod=30 Nov 24 12:45:40 crc kubenswrapper[4756]: I1124 12:45:40.649544 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75ff655f7b-j2rvr" podUID="c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" containerName="neutron-httpd" containerID="cri-o://9055e289857069bf82a30011465a6844f89ab4e18f7414413d0ccb0bb3cf7c66" gracePeriod=30 Nov 24 12:45:41 crc kubenswrapper[4756]: I1124 
12:45:41.253042 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:45:41 crc kubenswrapper[4756]: I1124 12:45:41.421940 4756 generic.go:334] "Generic (PLEG): container finished" podID="c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" containerID="9055e289857069bf82a30011465a6844f89ab4e18f7414413d0ccb0bb3cf7c66" exitCode=0 Nov 24 12:45:41 crc kubenswrapper[4756]: I1124 12:45:41.421995 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75ff655f7b-j2rvr" event={"ID":"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97","Type":"ContainerDied","Data":"9055e289857069bf82a30011465a6844f89ab4e18f7414413d0ccb0bb3cf7c66"} Nov 24 12:45:42 crc kubenswrapper[4756]: I1124 12:45:42.450087 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" event={"ID":"fe242a81-41af-42a9-8934-34f7d0ef485b","Type":"ContainerStarted","Data":"c092cf358a395b8ecbcb46bb0d74a39a0f0bf12c49195d48eb378ff6ea515098"} Nov 24 12:45:42 crc kubenswrapper[4756]: I1124 12:45:42.450777 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:42 crc kubenswrapper[4756]: I1124 12:45:42.453783 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx" event={"ID":"fb164396-9603-40ac-a47b-5b8feb1be35c","Type":"ContainerStarted","Data":"bd85c9fb07075d925525aabaa6e4887eba9947fe22eff599e3635271a274642a"} Nov 24 12:45:42 crc kubenswrapper[4756]: I1124 12:45:42.453821 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx" event={"ID":"fb164396-9603-40ac-a47b-5b8feb1be35c","Type":"ContainerStarted","Data":"2e808f9f0910785047ceb657903da26478515d8d31c8fbaf77de13065ca24ae1"} Nov 24 12:45:42 crc kubenswrapper[4756]: I1124 12:45:42.458377 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-568b98fff9-ngjr7" 
event={"ID":"2ab695f1-c645-42dc-be38-2935fbe4977d","Type":"ContainerStarted","Data":"dbdee4d24c90ffa6ca642985ba6f57ae09dfc5fbc979c87e1e9b064071e6e1a4"} Nov 24 12:45:42 crc kubenswrapper[4756]: I1124 12:45:42.486602 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" podStartSLOduration=5.486582406 podStartE2EDuration="5.486582406s" podCreationTimestamp="2025-11-24 12:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:42.472992941 +0000 UTC m=+1074.830507083" watchObservedRunningTime="2025-11-24 12:45:42.486582406 +0000 UTC m=+1074.844096548" Nov 24 12:45:42 crc kubenswrapper[4756]: I1124 12:45:42.491969 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb448fa-f259-458b-ba2c-07712e71af0d" path="/var/lib/kubelet/pods/bcb448fa-f259-458b-ba2c-07712e71af0d/volumes" Nov 24 12:45:42 crc kubenswrapper[4756]: I1124 12:45:42.497200 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-77956fdfb6-wlggx" podStartSLOduration=2.460459383 podStartE2EDuration="6.497149608s" podCreationTimestamp="2025-11-24 12:45:36 +0000 UTC" firstStartedPulling="2025-11-24 12:45:37.616292953 +0000 UTC m=+1069.973807085" lastFinishedPulling="2025-11-24 12:45:41.652983168 +0000 UTC m=+1074.010497310" observedRunningTime="2025-11-24 12:45:42.491637356 +0000 UTC m=+1074.849151498" watchObservedRunningTime="2025-11-24 12:45:42.497149608 +0000 UTC m=+1074.854663750" Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.524230 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9d688d03-563b-4cc5-b437-c867bd19e02d","Type":"ContainerStarted","Data":"d383de0afcbbdecae355a923b106313d4b3d5f5f0a4d664f460b7254e5286b6e"} Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.526009 4756 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.525592 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9d688d03-563b-4cc5-b437-c867bd19e02d" containerName="cinder-api" containerID="cri-o://d383de0afcbbdecae355a923b106313d4b3d5f5f0a4d664f460b7254e5286b6e" gracePeriod=30 Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.525288 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9d688d03-563b-4cc5-b437-c867bd19e02d" containerName="cinder-api-log" containerID="cri-o://917ad4d0aab31f6f77dfc32d2113c7345e647bbda154dc5735053e071ba7bb32" gracePeriod=30 Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.540534 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-568b98fff9-ngjr7" event={"ID":"2ab695f1-c645-42dc-be38-2935fbe4977d","Type":"ContainerStarted","Data":"ee35837a221a4fef34d45cabc853f3c38c99010e9e9354dc71696c688bea86d3"} Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.551139 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa","Type":"ContainerStarted","Data":"b8fa407aa3649695fc77e71d3aeee572cdb2d4b5de50bcd7431b5abf84316d7b"} Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.551218 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa","Type":"ContainerStarted","Data":"5e7066b6d01c256307d946b8c7b0c038db4c1b7a4fc8a01bfb846fcc949c90b5"} Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.559815 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.559792601 podStartE2EDuration="6.559792601s" podCreationTimestamp="2025-11-24 12:45:37 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:43.556103589 +0000 UTC m=+1075.913617741" watchObservedRunningTime="2025-11-24 12:45:43.559792601 +0000 UTC m=+1075.917306743" Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.567954 4756 generic.go:334] "Generic (PLEG): container finished" podID="c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" containerID="c50b718041377a553045016e4b00c94cdc1e71857eb88178cbdb72ba0348e8ef" exitCode=0 Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.568144 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75ff655f7b-j2rvr" event={"ID":"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97","Type":"ContainerDied","Data":"c50b718041377a553045016e4b00c94cdc1e71857eb88178cbdb72ba0348e8ef"} Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.603840 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-568b98fff9-ngjr7" podStartSLOduration=3.633195625 podStartE2EDuration="7.603811206s" podCreationTimestamp="2025-11-24 12:45:36 +0000 UTC" firstStartedPulling="2025-11-24 12:45:37.708378415 +0000 UTC m=+1070.065892567" lastFinishedPulling="2025-11-24 12:45:41.678993996 +0000 UTC m=+1074.036508148" observedRunningTime="2025-11-24 12:45:43.59595649 +0000 UTC m=+1075.953470632" watchObservedRunningTime="2025-11-24 12:45:43.603811206 +0000 UTC m=+1075.961325348" Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.651986 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.510329839 podStartE2EDuration="6.651964856s" podCreationTimestamp="2025-11-24 12:45:37 +0000 UTC" firstStartedPulling="2025-11-24 12:45:38.81780726 +0000 UTC m=+1071.175321402" lastFinishedPulling="2025-11-24 12:45:40.959442277 +0000 UTC m=+1073.316956419" observedRunningTime="2025-11-24 12:45:43.647622256 +0000 UTC m=+1076.005136398" 
watchObservedRunningTime="2025-11-24 12:45:43.651964856 +0000 UTC m=+1076.009478988" Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.899873 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.989989 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-config\") pod \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.990219 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-ovndb-tls-certs\") pod \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.990489 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-httpd-config\") pod \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.990555 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlngz\" (UniqueName: \"kubernetes.io/projected/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-kube-api-access-mlngz\") pod \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\" (UID: \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.990765 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-combined-ca-bundle\") pod \"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\" (UID: 
\"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97\") " Nov 24 12:45:43 crc kubenswrapper[4756]: I1124 12:45:43.998885 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" (UID: "c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.002264 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-kube-api-access-mlngz" (OuterVolumeSpecName: "kube-api-access-mlngz") pod "c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" (UID: "c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97"). InnerVolumeSpecName "kube-api-access-mlngz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.098666 4756 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.099327 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlngz\" (UniqueName: \"kubernetes.io/projected/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-kube-api-access-mlngz\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.137327 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" (UID: "c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.171322 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-config" (OuterVolumeSpecName: "config") pod "c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" (UID: "c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.204577 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.204613 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.221931 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5ddf8b8dd6-scmvb"] Nov 24 12:45:44 crc kubenswrapper[4756]: E1124 12:45:44.222338 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb448fa-f259-458b-ba2c-07712e71af0d" containerName="init" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.222355 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb448fa-f259-458b-ba2c-07712e71af0d" containerName="init" Nov 24 12:45:44 crc kubenswrapper[4756]: E1124 12:45:44.222372 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" containerName="neutron-api" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.222378 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" containerName="neutron-api" Nov 24 12:45:44 crc kubenswrapper[4756]: E1124 12:45:44.222386 4756 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" containerName="neutron-httpd" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.222392 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" containerName="neutron-httpd" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.222610 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" containerName="neutron-httpd" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.222628 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb448fa-f259-458b-ba2c-07712e71af0d" containerName="init" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.222639 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" containerName="neutron-api" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.223767 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.230818 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.231224 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.237393 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5ddf8b8dd6-scmvb"] Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.238332 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" (UID: "c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.313685 4756 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.415946 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlpnq\" (UniqueName: \"kubernetes.io/projected/767d77ce-06bb-44dc-b47f-229303527133-kube-api-access-jlpnq\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.416047 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-combined-ca-bundle\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.416085 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-config-data\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.416125 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-public-tls-certs\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc 
kubenswrapper[4756]: I1124 12:45:44.416222 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-internal-tls-certs\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.416309 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/767d77ce-06bb-44dc-b47f-229303527133-logs\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.416475 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-config-data-custom\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.519123 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-config-data-custom\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.519804 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlpnq\" (UniqueName: \"kubernetes.io/projected/767d77ce-06bb-44dc-b47f-229303527133-kube-api-access-jlpnq\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " 
pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.519832 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-combined-ca-bundle\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.519861 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-config-data\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.519891 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-public-tls-certs\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.519947 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-internal-tls-certs\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.519985 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/767d77ce-06bb-44dc-b47f-229303527133-logs\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 
crc kubenswrapper[4756]: I1124 12:45:44.520678 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/767d77ce-06bb-44dc-b47f-229303527133-logs\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.541065 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-public-tls-certs\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.541537 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-config-data-custom\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.541708 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-config-data\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.541959 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-combined-ca-bundle\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.564560 4756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/767d77ce-06bb-44dc-b47f-229303527133-internal-tls-certs\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.572650 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlpnq\" (UniqueName: \"kubernetes.io/projected/767d77ce-06bb-44dc-b47f-229303527133-kube-api-access-jlpnq\") pod \"barbican-api-5ddf8b8dd6-scmvb\" (UID: \"767d77ce-06bb-44dc-b47f-229303527133\") " pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.592808 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75ff655f7b-j2rvr" event={"ID":"c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97","Type":"ContainerDied","Data":"5edffee6f2b288416682950636922c58335e3fe25c0dea7138a5c70cf93f0e2c"} Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.592872 4756 scope.go:117] "RemoveContainer" containerID="9055e289857069bf82a30011465a6844f89ab4e18f7414413d0ccb0bb3cf7c66" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.593067 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75ff655f7b-j2rvr" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.611388 4756 generic.go:334] "Generic (PLEG): container finished" podID="9d688d03-563b-4cc5-b437-c867bd19e02d" containerID="d383de0afcbbdecae355a923b106313d4b3d5f5f0a4d664f460b7254e5286b6e" exitCode=0 Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.611558 4756 generic.go:334] "Generic (PLEG): container finished" podID="9d688d03-563b-4cc5-b437-c867bd19e02d" containerID="917ad4d0aab31f6f77dfc32d2113c7345e647bbda154dc5735053e071ba7bb32" exitCode=143 Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.611535 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9d688d03-563b-4cc5-b437-c867bd19e02d","Type":"ContainerDied","Data":"d383de0afcbbdecae355a923b106313d4b3d5f5f0a4d664f460b7254e5286b6e"} Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.613051 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9d688d03-563b-4cc5-b437-c867bd19e02d","Type":"ContainerDied","Data":"917ad4d0aab31f6f77dfc32d2113c7345e647bbda154dc5735053e071ba7bb32"} Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.667608 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75ff655f7b-j2rvr"] Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.669880 4756 scope.go:117] "RemoveContainer" containerID="c50b718041377a553045016e4b00c94cdc1e71857eb88178cbdb72ba0348e8ef" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.671557 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-75ff655f7b-j2rvr"] Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.868463 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:44 crc kubenswrapper[4756]: I1124 12:45:44.992767 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.139976 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d688d03-563b-4cc5-b437-c867bd19e02d-etc-machine-id\") pod \"9d688d03-563b-4cc5-b437-c867bd19e02d\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.140287 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d688d03-563b-4cc5-b437-c867bd19e02d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9d688d03-563b-4cc5-b437-c867bd19e02d" (UID: "9d688d03-563b-4cc5-b437-c867bd19e02d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.140865 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d688d03-563b-4cc5-b437-c867bd19e02d-logs\") pod \"9d688d03-563b-4cc5-b437-c867bd19e02d\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.141571 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d688d03-563b-4cc5-b437-c867bd19e02d-logs" (OuterVolumeSpecName: "logs") pod "9d688d03-563b-4cc5-b437-c867bd19e02d" (UID: "9d688d03-563b-4cc5-b437-c867bd19e02d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.141675 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-combined-ca-bundle\") pod \"9d688d03-563b-4cc5-b437-c867bd19e02d\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.141694 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-config-data-custom\") pod \"9d688d03-563b-4cc5-b437-c867bd19e02d\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.142714 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-scripts\") pod \"9d688d03-563b-4cc5-b437-c867bd19e02d\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.142761 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czkd9\" (UniqueName: \"kubernetes.io/projected/9d688d03-563b-4cc5-b437-c867bd19e02d-kube-api-access-czkd9\") pod \"9d688d03-563b-4cc5-b437-c867bd19e02d\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.142847 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-config-data\") pod \"9d688d03-563b-4cc5-b437-c867bd19e02d\" (UID: \"9d688d03-563b-4cc5-b437-c867bd19e02d\") " Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.143529 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9d688d03-563b-4cc5-b437-c867bd19e02d-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.143542 4756 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d688d03-563b-4cc5-b437-c867bd19e02d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.174204 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d688d03-563b-4cc5-b437-c867bd19e02d-kube-api-access-czkd9" (OuterVolumeSpecName: "kube-api-access-czkd9") pod "9d688d03-563b-4cc5-b437-c867bd19e02d" (UID: "9d688d03-563b-4cc5-b437-c867bd19e02d"). InnerVolumeSpecName "kube-api-access-czkd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.176320 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-scripts" (OuterVolumeSpecName: "scripts") pod "9d688d03-563b-4cc5-b437-c867bd19e02d" (UID: "9d688d03-563b-4cc5-b437-c867bd19e02d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.182315 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9d688d03-563b-4cc5-b437-c867bd19e02d" (UID: "9d688d03-563b-4cc5-b437-c867bd19e02d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.215484 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d688d03-563b-4cc5-b437-c867bd19e02d" (UID: "9d688d03-563b-4cc5-b437-c867bd19e02d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.246336 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.246844 4756 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.246911 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.246997 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czkd9\" (UniqueName: \"kubernetes.io/projected/9d688d03-563b-4cc5-b437-c867bd19e02d-kube-api-access-czkd9\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.269403 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-config-data" (OuterVolumeSpecName: "config-data") pod "9d688d03-563b-4cc5-b437-c867bd19e02d" (UID: "9d688d03-563b-4cc5-b437-c867bd19e02d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.350028 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d688d03-563b-4cc5-b437-c867bd19e02d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.498075 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5ddf8b8dd6-scmvb"] Nov 24 12:45:45 crc kubenswrapper[4756]: W1124 12:45:45.509122 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod767d77ce_06bb_44dc_b47f_229303527133.slice/crio-3d844ce854008f8ed852450a71c243795956828ce19dd573bbbd3891460819b6 WatchSource:0}: Error finding container 3d844ce854008f8ed852450a71c243795956828ce19dd573bbbd3891460819b6: Status 404 returned error can't find the container with id 3d844ce854008f8ed852450a71c243795956828ce19dd573bbbd3891460819b6 Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.625079 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ddf8b8dd6-scmvb" event={"ID":"767d77ce-06bb-44dc-b47f-229303527133","Type":"ContainerStarted","Data":"3d844ce854008f8ed852450a71c243795956828ce19dd573bbbd3891460819b6"} Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.629512 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9d688d03-563b-4cc5-b437-c867bd19e02d","Type":"ContainerDied","Data":"faf1aac5fe2fa3dc4aabcf8f1a7f5e242e690b7fda945cc636e353044c844543"} Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.629559 4756 scope.go:117] "RemoveContainer" containerID="d383de0afcbbdecae355a923b106313d4b3d5f5f0a4d664f460b7254e5286b6e" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.629595 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.666723 4756 scope.go:117] "RemoveContainer" containerID="917ad4d0aab31f6f77dfc32d2113c7345e647bbda154dc5735053e071ba7bb32" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.689539 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.703521 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.718213 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:45:45 crc kubenswrapper[4756]: E1124 12:45:45.719106 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d688d03-563b-4cc5-b437-c867bd19e02d" containerName="cinder-api-log" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.719255 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d688d03-563b-4cc5-b437-c867bd19e02d" containerName="cinder-api-log" Nov 24 12:45:45 crc kubenswrapper[4756]: E1124 12:45:45.719328 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d688d03-563b-4cc5-b437-c867bd19e02d" containerName="cinder-api" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.719394 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d688d03-563b-4cc5-b437-c867bd19e02d" containerName="cinder-api" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.719808 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d688d03-563b-4cc5-b437-c867bd19e02d" containerName="cinder-api-log" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.719891 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d688d03-563b-4cc5-b437-c867bd19e02d" containerName="cinder-api" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.725369 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.737141 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.737477 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.741234 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.770234 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.861683 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.861777 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-scripts\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.861886 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-config-data\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.862026 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.862104 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.862135 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-config-data-custom\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.862182 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-logs\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.862279 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwmwz\" (UniqueName: \"kubernetes.io/projected/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-kube-api-access-gwmwz\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.862314 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-etc-machine-id\") pod \"cinder-api-0\" 
(UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.964415 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.964501 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-scripts\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.964529 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-config-data\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.964583 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.964624 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.964652 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-config-data-custom\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.964679 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-logs\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.964804 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwmwz\" (UniqueName: \"kubernetes.io/projected/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-kube-api-access-gwmwz\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.964837 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.964999 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.965368 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-logs\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc 
kubenswrapper[4756]: I1124 12:45:45.968820 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-scripts\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.969037 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.969787 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.969966 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-config-data-custom\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.971912 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.972765 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-config-data\") pod \"cinder-api-0\" (UID: 
\"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:45 crc kubenswrapper[4756]: I1124 12:45:45.991735 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwmwz\" (UniqueName: \"kubernetes.io/projected/7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348-kube-api-access-gwmwz\") pod \"cinder-api-0\" (UID: \"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348\") " pod="openstack/cinder-api-0" Nov 24 12:45:46 crc kubenswrapper[4756]: I1124 12:45:46.059472 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 12:45:46 crc kubenswrapper[4756]: I1124 12:45:46.506210 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d688d03-563b-4cc5-b437-c867bd19e02d" path="/var/lib/kubelet/pods/9d688d03-563b-4cc5-b437-c867bd19e02d/volumes" Nov 24 12:45:46 crc kubenswrapper[4756]: I1124 12:45:46.508176 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97" path="/var/lib/kubelet/pods/c0eb15ba-cd0c-47c9-a5e7-eebec05c6e97/volumes" Nov 24 12:45:46 crc kubenswrapper[4756]: I1124 12:45:46.645470 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ddf8b8dd6-scmvb" event={"ID":"767d77ce-06bb-44dc-b47f-229303527133","Type":"ContainerStarted","Data":"aaedbed2bf524771ae4ea9905cbd809347921dfda27c1fcfd5c5d421ad9b1b87"} Nov 24 12:45:46 crc kubenswrapper[4756]: I1124 12:45:46.645577 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 12:45:46 crc kubenswrapper[4756]: I1124 12:45:46.645612 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ddf8b8dd6-scmvb" event={"ID":"767d77ce-06bb-44dc-b47f-229303527133","Type":"ContainerStarted","Data":"c7eacd8fff7a5985ec060dd738368f85b9a8890e78c55fdfe0fb2dbcb2b8623b"} Nov 24 12:45:46 crc kubenswrapper[4756]: I1124 12:45:46.646500 4756 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:46 crc kubenswrapper[4756]: I1124 12:45:46.671049 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5ddf8b8dd6-scmvb" podStartSLOduration=2.671027961 podStartE2EDuration="2.671027961s" podCreationTimestamp="2025-11-24 12:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:46.670294821 +0000 UTC m=+1079.027808983" watchObservedRunningTime="2025-11-24 12:45:46.671027961 +0000 UTC m=+1079.028542113" Nov 24 12:45:47 crc kubenswrapper[4756]: I1124 12:45:47.659664 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348","Type":"ContainerStarted","Data":"a551a101d1b6c2f63a876a5ec55a9ac46e35b0cbed8c90c2f8ab2cf477bddf48"} Nov 24 12:45:47 crc kubenswrapper[4756]: I1124 12:45:47.660021 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:48 crc kubenswrapper[4756]: I1124 12:45:48.011255 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 12:45:48 crc kubenswrapper[4756]: I1124 12:45:48.174223 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:45:48 crc kubenswrapper[4756]: I1124 12:45:48.258713 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-fwcbh"] Nov 24 12:45:48 crc kubenswrapper[4756]: I1124 12:45:48.259019 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" podUID="f48f5095-0233-463e-b8ce-7bf7bf6e51e3" containerName="dnsmasq-dns" containerID="cri-o://1910b17675efe7e7dabaecf04543bc2360c4ac17979ddc9d6b8af5a3678b4248" 
gracePeriod=10 Nov 24 12:45:48 crc kubenswrapper[4756]: I1124 12:45:48.514335 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 12:45:48 crc kubenswrapper[4756]: I1124 12:45:48.692732 4756 generic.go:334] "Generic (PLEG): container finished" podID="f48f5095-0233-463e-b8ce-7bf7bf6e51e3" containerID="1910b17675efe7e7dabaecf04543bc2360c4ac17979ddc9d6b8af5a3678b4248" exitCode=0 Nov 24 12:45:48 crc kubenswrapper[4756]: I1124 12:45:48.693292 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" event={"ID":"f48f5095-0233-463e-b8ce-7bf7bf6e51e3","Type":"ContainerDied","Data":"1910b17675efe7e7dabaecf04543bc2360c4ac17979ddc9d6b8af5a3678b4248"} Nov 24 12:45:48 crc kubenswrapper[4756]: I1124 12:45:48.731573 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348","Type":"ContainerStarted","Data":"5b032f136d72bdd5f4077dbcb41d25e4933ae77ef858a9285895991d59869b54"} Nov 24 12:45:48 crc kubenswrapper[4756]: I1124 12:45:48.809434 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.000804 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.066402 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rzqf\" (UniqueName: \"kubernetes.io/projected/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-kube-api-access-4rzqf\") pod \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.066501 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-ovsdbserver-sb\") pod \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.066652 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-ovsdbserver-nb\") pod \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.066763 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-config\") pod \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.066786 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-dns-swift-storage-0\") pod \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.066840 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-dns-svc\") pod \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\" (UID: \"f48f5095-0233-463e-b8ce-7bf7bf6e51e3\") " Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.084544 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-kube-api-access-4rzqf" (OuterVolumeSpecName: "kube-api-access-4rzqf") pod "f48f5095-0233-463e-b8ce-7bf7bf6e51e3" (UID: "f48f5095-0233-463e-b8ce-7bf7bf6e51e3"). InnerVolumeSpecName "kube-api-access-4rzqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.157441 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f48f5095-0233-463e-b8ce-7bf7bf6e51e3" (UID: "f48f5095-0233-463e-b8ce-7bf7bf6e51e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.169634 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rzqf\" (UniqueName: \"kubernetes.io/projected/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-kube-api-access-4rzqf\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.169668 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.189966 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f48f5095-0233-463e-b8ce-7bf7bf6e51e3" (UID: "f48f5095-0233-463e-b8ce-7bf7bf6e51e3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.195872 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f48f5095-0233-463e-b8ce-7bf7bf6e51e3" (UID: "f48f5095-0233-463e-b8ce-7bf7bf6e51e3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.207667 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-config" (OuterVolumeSpecName: "config") pod "f48f5095-0233-463e-b8ce-7bf7bf6e51e3" (UID: "f48f5095-0233-463e-b8ce-7bf7bf6e51e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.214542 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f48f5095-0233-463e-b8ce-7bf7bf6e51e3" (UID: "f48f5095-0233-463e-b8ce-7bf7bf6e51e3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.271492 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.271539 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.271554 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.271566 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f48f5095-0233-463e-b8ce-7bf7bf6e51e3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.328697 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74996ffc74-hpkv2" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.516906 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74996ffc74-hpkv2" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.746180 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" event={"ID":"f48f5095-0233-463e-b8ce-7bf7bf6e51e3","Type":"ContainerDied","Data":"30e621b57d1389928d31f41130fad1b857f9963f5e73b93b5d8e2f254eb9a157"} Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.746545 4756 scope.go:117] "RemoveContainer" containerID="1910b17675efe7e7dabaecf04543bc2360c4ac17979ddc9d6b8af5a3678b4248" Nov 24 12:45:49 crc 
kubenswrapper[4756]: I1124 12:45:49.746222 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-fwcbh" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.768310 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348","Type":"ContainerStarted","Data":"7e459e5f54cbf3f7ce6b69eb4dd656c182e497b9e6ebc0fc06714972cc446098"} Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.768341 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" containerName="cinder-scheduler" containerID="cri-o://5e7066b6d01c256307d946b8c7b0c038db4c1b7a4fc8a01bfb846fcc949c90b5" gracePeriod=30 Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.768502 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" containerName="probe" containerID="cri-o://b8fa407aa3649695fc77e71d3aeee572cdb2d4b5de50bcd7431b5abf84316d7b" gracePeriod=30 Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.805096 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-fwcbh"] Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.829895 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-fwcbh"] Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.837021 4756 scope.go:117] "RemoveContainer" containerID="ea8904a79318528b72df8c293c7091235959c470516d84558bc657123e09e126" Nov 24 12:45:49 crc kubenswrapper[4756]: I1124 12:45:49.842608 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.842584557 podStartE2EDuration="4.842584557s" podCreationTimestamp="2025-11-24 12:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:49.820485697 +0000 UTC m=+1082.177999859" watchObservedRunningTime="2025-11-24 12:45:49.842584557 +0000 UTC m=+1082.200098709" Nov 24 12:45:50 crc kubenswrapper[4756]: I1124 12:45:50.493819 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f48f5095-0233-463e-b8ce-7bf7bf6e51e3" path="/var/lib/kubelet/pods/f48f5095-0233-463e-b8ce-7bf7bf6e51e3/volumes" Nov 24 12:45:50 crc kubenswrapper[4756]: I1124 12:45:50.780453 4756 generic.go:334] "Generic (PLEG): container finished" podID="b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" containerID="b8fa407aa3649695fc77e71d3aeee572cdb2d4b5de50bcd7431b5abf84316d7b" exitCode=0 Nov 24 12:45:50 crc kubenswrapper[4756]: I1124 12:45:50.780591 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa","Type":"ContainerDied","Data":"b8fa407aa3649695fc77e71d3aeee572cdb2d4b5de50bcd7431b5abf84316d7b"} Nov 24 12:45:50 crc kubenswrapper[4756]: I1124 12:45:50.782566 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 12:45:51 crc kubenswrapper[4756]: I1124 12:45:51.359387 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:45:51 crc kubenswrapper[4756]: I1124 12:45:51.385621 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:45:51 crc kubenswrapper[4756]: I1124 12:45:51.793741 4756 generic.go:334] "Generic (PLEG): container finished" podID="b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" containerID="5e7066b6d01c256307d946b8c7b0c038db4c1b7a4fc8a01bfb846fcc949c90b5" exitCode=0 Nov 24 12:45:51 crc kubenswrapper[4756]: I1124 12:45:51.794641 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa","Type":"ContainerDied","Data":"5e7066b6d01c256307d946b8c7b0c038db4c1b7a4fc8a01bfb846fcc949c90b5"} Nov 24 12:45:51 crc kubenswrapper[4756]: I1124 12:45:51.924214 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.030991 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-combined-ca-bundle\") pod \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.031317 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-scripts\") pod \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.031413 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-config-data-custom\") pod \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.031457 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-etc-machine-id\") pod \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.031510 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8vw5\" (UniqueName: 
\"kubernetes.io/projected/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-kube-api-access-k8vw5\") pod \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.031539 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-config-data\") pod \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\" (UID: \"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa\") " Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.033335 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" (UID: "b53e518e-7a09-4b1c-a5cd-c2b03ef950aa"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.049754 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-kube-api-access-k8vw5" (OuterVolumeSpecName: "kube-api-access-k8vw5") pod "b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" (UID: "b53e518e-7a09-4b1c-a5cd-c2b03ef950aa"). InnerVolumeSpecName "kube-api-access-k8vw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.050597 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" (UID: "b53e518e-7a09-4b1c-a5cd-c2b03ef950aa"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.051325 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-scripts" (OuterVolumeSpecName: "scripts") pod "b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" (UID: "b53e518e-7a09-4b1c-a5cd-c2b03ef950aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.103955 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" (UID: "b53e518e-7a09-4b1c-a5cd-c2b03ef950aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.137199 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.137297 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.137311 4756 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.137322 4756 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:52 
crc kubenswrapper[4756]: I1124 12:45:52.137334 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8vw5\" (UniqueName: \"kubernetes.io/projected/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-kube-api-access-k8vw5\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.147044 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-config-data" (OuterVolumeSpecName: "config-data") pod "b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" (UID: "b53e518e-7a09-4b1c-a5cd-c2b03ef950aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.240775 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.807382 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b53e518e-7a09-4b1c-a5cd-c2b03ef950aa","Type":"ContainerDied","Data":"c20de72026fbace770490da1e3b41e7e67946a682d028ee2d4d7e2b2030b66a0"} Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.808380 4756 scope.go:117] "RemoveContainer" containerID="b8fa407aa3649695fc77e71d3aeee572cdb2d4b5de50bcd7431b5abf84316d7b" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.808669 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.834840 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.845887 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.846229 4756 scope.go:117] "RemoveContainer" containerID="5e7066b6d01c256307d946b8c7b0c038db4c1b7a4fc8a01bfb846fcc949c90b5" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.865305 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:45:52 crc kubenswrapper[4756]: E1124 12:45:52.865808 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" containerName="cinder-scheduler" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.865844 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" containerName="cinder-scheduler" Nov 24 12:45:52 crc kubenswrapper[4756]: E1124 12:45:52.865866 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48f5095-0233-463e-b8ce-7bf7bf6e51e3" containerName="init" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.865876 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48f5095-0233-463e-b8ce-7bf7bf6e51e3" containerName="init" Nov 24 12:45:52 crc kubenswrapper[4756]: E1124 12:45:52.865888 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48f5095-0233-463e-b8ce-7bf7bf6e51e3" containerName="dnsmasq-dns" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.865897 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48f5095-0233-463e-b8ce-7bf7bf6e51e3" containerName="dnsmasq-dns" Nov 24 12:45:52 crc kubenswrapper[4756]: E1124 12:45:52.865930 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" containerName="probe" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.865942 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" containerName="probe" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.866218 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" containerName="probe" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.866264 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f48f5095-0233-463e-b8ce-7bf7bf6e51e3" containerName="dnsmasq-dns" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.866281 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" containerName="cinder-scheduler" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.867918 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.880085 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.898300 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.957106 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202e89fe-1aa2-462b-b3cf-2d71151c8de9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.957202 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/202e89fe-1aa2-462b-b3cf-2d71151c8de9-config-data\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.957235 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/202e89fe-1aa2-462b-b3cf-2d71151c8de9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.957433 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202e89fe-1aa2-462b-b3cf-2d71151c8de9-scripts\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.957595 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/202e89fe-1aa2-462b-b3cf-2d71151c8de9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:52 crc kubenswrapper[4756]: I1124 12:45:52.957725 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pzp2\" (UniqueName: \"kubernetes.io/projected/202e89fe-1aa2-462b-b3cf-2d71151c8de9-kube-api-access-5pzp2\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.073379 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/202e89fe-1aa2-462b-b3cf-2d71151c8de9-etc-machine-id\") 
pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.073480 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pzp2\" (UniqueName: \"kubernetes.io/projected/202e89fe-1aa2-462b-b3cf-2d71151c8de9-kube-api-access-5pzp2\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.073649 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202e89fe-1aa2-462b-b3cf-2d71151c8de9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.073714 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202e89fe-1aa2-462b-b3cf-2d71151c8de9-config-data\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.073748 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/202e89fe-1aa2-462b-b3cf-2d71151c8de9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.073770 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/202e89fe-1aa2-462b-b3cf-2d71151c8de9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc 
kubenswrapper[4756]: I1124 12:45:53.073793 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202e89fe-1aa2-462b-b3cf-2d71151c8de9-scripts\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.080016 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202e89fe-1aa2-462b-b3cf-2d71151c8de9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.080318 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202e89fe-1aa2-462b-b3cf-2d71151c8de9-config-data\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.081655 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/202e89fe-1aa2-462b-b3cf-2d71151c8de9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.088354 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/202e89fe-1aa2-462b-b3cf-2d71151c8de9-scripts\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.102507 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pzp2\" (UniqueName: 
\"kubernetes.io/projected/202e89fe-1aa2-462b-b3cf-2d71151c8de9-kube-api-access-5pzp2\") pod \"cinder-scheduler-0\" (UID: \"202e89fe-1aa2-462b-b3cf-2d71151c8de9\") " pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.189820 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.556856 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-585c6478b8-gsbzg" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.627702 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b958b5cb8-lff28"] Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.627954 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b958b5cb8-lff28" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon-log" containerID="cri-o://83264baafdec5f896f80ba132bef8428f337228e6f1141cf6f1da54db08c9038" gracePeriod=30 Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.628094 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b958b5cb8-lff28" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon" containerID="cri-o://d4210a5e297caab6fdf04b86fae8401d9ffa8250dd43be159b9b134b31037963" gracePeriod=30 Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.636390 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b958b5cb8-lff28" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Nov 24 12:45:53 crc kubenswrapper[4756]: I1124 12:45:53.798652 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 12:45:53 crc kubenswrapper[4756]: W1124 12:45:53.808323 4756 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod202e89fe_1aa2_462b_b3cf_2d71151c8de9.slice/crio-803623ec9536e3d0eb653bb76226d857f92a65e5477b75b2235c8d5418e70fb7 WatchSource:0}: Error finding container 803623ec9536e3d0eb653bb76226d857f92a65e5477b75b2235c8d5418e70fb7: Status 404 returned error can't find the container with id 803623ec9536e3d0eb653bb76226d857f92a65e5477b75b2235c8d5418e70fb7 Nov 24 12:45:54 crc kubenswrapper[4756]: I1124 12:45:54.491129 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b53e518e-7a09-4b1c-a5cd-c2b03ef950aa" path="/var/lib/kubelet/pods/b53e518e-7a09-4b1c-a5cd-c2b03ef950aa/volumes" Nov 24 12:45:54 crc kubenswrapper[4756]: I1124 12:45:54.842539 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"202e89fe-1aa2-462b-b3cf-2d71151c8de9","Type":"ContainerStarted","Data":"f4df4630ef84e2d7abe6f7587708fbd18c65e4e131262d810e8e871755f04ca5"} Nov 24 12:45:54 crc kubenswrapper[4756]: I1124 12:45:54.842597 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"202e89fe-1aa2-462b-b3cf-2d71151c8de9","Type":"ContainerStarted","Data":"803623ec9536e3d0eb653bb76226d857f92a65e5477b75b2235c8d5418e70fb7"} Nov 24 12:45:55 crc kubenswrapper[4756]: I1124 12:45:55.914511 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"202e89fe-1aa2-462b-b3cf-2d71151c8de9","Type":"ContainerStarted","Data":"55c9fbddc8efb46e8e4f3b68bb8b50838ec1177d92d28bf3899fd8a64d3da466"} Nov 24 12:45:55 crc kubenswrapper[4756]: I1124 12:45:55.963242 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.963203466 podStartE2EDuration="3.963203466s" podCreationTimestamp="2025-11-24 12:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:45:55.960706317 +0000 UTC m=+1088.318220469" watchObservedRunningTime="2025-11-24 12:45:55.963203466 +0000 UTC m=+1088.320717608" Nov 24 12:45:56 crc kubenswrapper[4756]: I1124 12:45:56.661228 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:56 crc kubenswrapper[4756]: I1124 12:45:56.846918 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:56 crc kubenswrapper[4756]: I1124 12:45:56.848468 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f8c9745c6-b4wdj" Nov 24 12:45:57 crc kubenswrapper[4756]: I1124 12:45:57.065651 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b958b5cb8-lff28" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:44056->10.217.0.162:8443: read: connection reset by peer" Nov 24 12:45:57 crc kubenswrapper[4756]: I1124 12:45:57.066748 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b958b5cb8-lff28" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Nov 24 12:45:57 crc kubenswrapper[4756]: I1124 12:45:57.092639 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5995df89cc-sxcgq" Nov 24 12:45:57 crc kubenswrapper[4756]: I1124 12:45:57.192874 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5ddf8b8dd6-scmvb" Nov 24 12:45:57 crc kubenswrapper[4756]: I1124 12:45:57.305993 4756 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/barbican-api-74996ffc74-hpkv2"] Nov 24 12:45:57 crc kubenswrapper[4756]: I1124 12:45:57.306654 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74996ffc74-hpkv2" podUID="3ac68bd7-a5af-4908-8249-bfa5ef938acc" containerName="barbican-api-log" containerID="cri-o://65fb38212c4d181639670f15a940278b97431518b19615d7f411a29773cec49a" gracePeriod=30 Nov 24 12:45:57 crc kubenswrapper[4756]: I1124 12:45:57.306795 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74996ffc74-hpkv2" podUID="3ac68bd7-a5af-4908-8249-bfa5ef938acc" containerName="barbican-api" containerID="cri-o://cdad19809a7c02aa62180047b38d02223334a5cba447dbf357aa3bc48ab26742" gracePeriod=30 Nov 24 12:45:57 crc kubenswrapper[4756]: I1124 12:45:57.944984 4756 generic.go:334] "Generic (PLEG): container finished" podID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerID="d4210a5e297caab6fdf04b86fae8401d9ffa8250dd43be159b9b134b31037963" exitCode=0 Nov 24 12:45:57 crc kubenswrapper[4756]: I1124 12:45:57.945048 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b958b5cb8-lff28" event={"ID":"ac680988-de91-4b39-ac09-3938cd5a2f91","Type":"ContainerDied","Data":"d4210a5e297caab6fdf04b86fae8401d9ffa8250dd43be159b9b134b31037963"} Nov 24 12:45:57 crc kubenswrapper[4756]: I1124 12:45:57.947415 4756 generic.go:334] "Generic (PLEG): container finished" podID="3ac68bd7-a5af-4908-8249-bfa5ef938acc" containerID="65fb38212c4d181639670f15a940278b97431518b19615d7f411a29773cec49a" exitCode=143 Nov 24 12:45:57 crc kubenswrapper[4756]: I1124 12:45:57.947464 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74996ffc74-hpkv2" event={"ID":"3ac68bd7-a5af-4908-8249-bfa5ef938acc","Type":"ContainerDied","Data":"65fb38212c4d181639670f15a940278b97431518b19615d7f411a29773cec49a"} Nov 24 12:45:58 crc kubenswrapper[4756]: I1124 12:45:58.180647 4756 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 12:45:58 crc kubenswrapper[4756]: I1124 12:45:58.191589 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 12:45:59 crc kubenswrapper[4756]: I1124 12:45:59.078322 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 12:46:00 crc kubenswrapper[4756]: I1124 12:46:00.955045 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 12:46:00 crc kubenswrapper[4756]: I1124 12:46:00.957374 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 12:46:00 crc kubenswrapper[4756]: I1124 12:46:00.960021 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jmq8s" Nov 24 12:46:00 crc kubenswrapper[4756]: I1124 12:46:00.964020 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 24 12:46:00 crc kubenswrapper[4756]: I1124 12:46:00.964862 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 24 12:46:00 crc kubenswrapper[4756]: I1124 12:46:00.977217 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.029527 4756 generic.go:334] "Generic (PLEG): container finished" podID="3ac68bd7-a5af-4908-8249-bfa5ef938acc" containerID="cdad19809a7c02aa62180047b38d02223334a5cba447dbf357aa3bc48ab26742" exitCode=0 Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.029582 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74996ffc74-hpkv2" 
event={"ID":"3ac68bd7-a5af-4908-8249-bfa5ef938acc","Type":"ContainerDied","Data":"cdad19809a7c02aa62180047b38d02223334a5cba447dbf357aa3bc48ab26742"} Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.071511 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4sb6\" (UniqueName: \"kubernetes.io/projected/08b95fcc-45c6-4618-bdad-3fb8c095e753-kube-api-access-f4sb6\") pod \"openstackclient\" (UID: \"08b95fcc-45c6-4618-bdad-3fb8c095e753\") " pod="openstack/openstackclient" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.071578 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b95fcc-45c6-4618-bdad-3fb8c095e753-combined-ca-bundle\") pod \"openstackclient\" (UID: \"08b95fcc-45c6-4618-bdad-3fb8c095e753\") " pod="openstack/openstackclient" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.071979 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/08b95fcc-45c6-4618-bdad-3fb8c095e753-openstack-config-secret\") pod \"openstackclient\" (UID: \"08b95fcc-45c6-4618-bdad-3fb8c095e753\") " pod="openstack/openstackclient" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.072127 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/08b95fcc-45c6-4618-bdad-3fb8c095e753-openstack-config\") pod \"openstackclient\" (UID: \"08b95fcc-45c6-4618-bdad-3fb8c095e753\") " pod="openstack/openstackclient" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.177066 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/08b95fcc-45c6-4618-bdad-3fb8c095e753-openstack-config-secret\") pod 
\"openstackclient\" (UID: \"08b95fcc-45c6-4618-bdad-3fb8c095e753\") " pod="openstack/openstackclient" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.177406 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/08b95fcc-45c6-4618-bdad-3fb8c095e753-openstack-config\") pod \"openstackclient\" (UID: \"08b95fcc-45c6-4618-bdad-3fb8c095e753\") " pod="openstack/openstackclient" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.178010 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4sb6\" (UniqueName: \"kubernetes.io/projected/08b95fcc-45c6-4618-bdad-3fb8c095e753-kube-api-access-f4sb6\") pod \"openstackclient\" (UID: \"08b95fcc-45c6-4618-bdad-3fb8c095e753\") " pod="openstack/openstackclient" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.178405 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b95fcc-45c6-4618-bdad-3fb8c095e753-combined-ca-bundle\") pod \"openstackclient\" (UID: \"08b95fcc-45c6-4618-bdad-3fb8c095e753\") " pod="openstack/openstackclient" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.178594 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/08b95fcc-45c6-4618-bdad-3fb8c095e753-openstack-config\") pod \"openstackclient\" (UID: \"08b95fcc-45c6-4618-bdad-3fb8c095e753\") " pod="openstack/openstackclient" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.183528 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/08b95fcc-45c6-4618-bdad-3fb8c095e753-openstack-config-secret\") pod \"openstackclient\" (UID: \"08b95fcc-45c6-4618-bdad-3fb8c095e753\") " pod="openstack/openstackclient" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 
12:46:01.187583 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b95fcc-45c6-4618-bdad-3fb8c095e753-combined-ca-bundle\") pod \"openstackclient\" (UID: \"08b95fcc-45c6-4618-bdad-3fb8c095e753\") " pod="openstack/openstackclient" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.204574 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4sb6\" (UniqueName: \"kubernetes.io/projected/08b95fcc-45c6-4618-bdad-3fb8c095e753-kube-api-access-f4sb6\") pod \"openstackclient\" (UID: \"08b95fcc-45c6-4618-bdad-3fb8c095e753\") " pod="openstack/openstackclient" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.300612 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74996ffc74-hpkv2" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.312180 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.384140 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-combined-ca-bundle\") pod \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.384485 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-config-data-custom\") pod \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.384733 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-config-data\") pod \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.385025 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qblsb\" (UniqueName: \"kubernetes.io/projected/3ac68bd7-a5af-4908-8249-bfa5ef938acc-kube-api-access-qblsb\") pod \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.385842 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ac68bd7-a5af-4908-8249-bfa5ef938acc-logs\") pod \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\" (UID: \"3ac68bd7-a5af-4908-8249-bfa5ef938acc\") " Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.386690 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ac68bd7-a5af-4908-8249-bfa5ef938acc-logs" (OuterVolumeSpecName: "logs") pod "3ac68bd7-a5af-4908-8249-bfa5ef938acc" (UID: "3ac68bd7-a5af-4908-8249-bfa5ef938acc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.391467 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3ac68bd7-a5af-4908-8249-bfa5ef938acc" (UID: "3ac68bd7-a5af-4908-8249-bfa5ef938acc"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.398353 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac68bd7-a5af-4908-8249-bfa5ef938acc-kube-api-access-qblsb" (OuterVolumeSpecName: "kube-api-access-qblsb") pod "3ac68bd7-a5af-4908-8249-bfa5ef938acc" (UID: "3ac68bd7-a5af-4908-8249-bfa5ef938acc"). InnerVolumeSpecName "kube-api-access-qblsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.468467 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ac68bd7-a5af-4908-8249-bfa5ef938acc" (UID: "3ac68bd7-a5af-4908-8249-bfa5ef938acc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.488577 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qblsb\" (UniqueName: \"kubernetes.io/projected/3ac68bd7-a5af-4908-8249-bfa5ef938acc-kube-api-access-qblsb\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.488605 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ac68bd7-a5af-4908-8249-bfa5ef938acc-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.488616 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.488626 4756 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.525694 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-config-data" (OuterVolumeSpecName: "config-data") pod "3ac68bd7-a5af-4908-8249-bfa5ef938acc" (UID: "3ac68bd7-a5af-4908-8249-bfa5ef938acc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.590361 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ac68bd7-a5af-4908-8249-bfa5ef938acc-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:01 crc kubenswrapper[4756]: I1124 12:46:01.901769 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 12:46:01 crc kubenswrapper[4756]: W1124 12:46:01.926807 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08b95fcc_45c6_4618_bdad_3fb8c095e753.slice/crio-32e838d76bc1c604633c6706e2b82188e4490d6cf4d26e506f14f709182ecdc2 WatchSource:0}: Error finding container 32e838d76bc1c604633c6706e2b82188e4490d6cf4d26e506f14f709182ecdc2: Status 404 returned error can't find the container with id 32e838d76bc1c604633c6706e2b82188e4490d6cf4d26e506f14f709182ecdc2 Nov 24 12:46:02 crc kubenswrapper[4756]: I1124 12:46:02.040429 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"08b95fcc-45c6-4618-bdad-3fb8c095e753","Type":"ContainerStarted","Data":"32e838d76bc1c604633c6706e2b82188e4490d6cf4d26e506f14f709182ecdc2"} Nov 24 12:46:02 crc kubenswrapper[4756]: I1124 12:46:02.043057 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74996ffc74-hpkv2" 
event={"ID":"3ac68bd7-a5af-4908-8249-bfa5ef938acc","Type":"ContainerDied","Data":"9b091de439eae88567d9bd8980e475c7bf4e714f9bc495f0968d2a1e7750afc0"} Nov 24 12:46:02 crc kubenswrapper[4756]: I1124 12:46:02.043112 4756 scope.go:117] "RemoveContainer" containerID="cdad19809a7c02aa62180047b38d02223334a5cba447dbf357aa3bc48ab26742" Nov 24 12:46:02 crc kubenswrapper[4756]: I1124 12:46:02.043132 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74996ffc74-hpkv2" Nov 24 12:46:02 crc kubenswrapper[4756]: I1124 12:46:02.088183 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74996ffc74-hpkv2"] Nov 24 12:46:02 crc kubenswrapper[4756]: I1124 12:46:02.096862 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-74996ffc74-hpkv2"] Nov 24 12:46:02 crc kubenswrapper[4756]: I1124 12:46:02.098414 4756 scope.go:117] "RemoveContainer" containerID="65fb38212c4d181639670f15a940278b97431518b19615d7f411a29773cec49a" Nov 24 12:46:02 crc kubenswrapper[4756]: I1124 12:46:02.492321 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ac68bd7-a5af-4908-8249-bfa5ef938acc" path="/var/lib/kubelet/pods/3ac68bd7-a5af-4908-8249-bfa5ef938acc/volumes" Nov 24 12:46:03 crc kubenswrapper[4756]: I1124 12:46:03.499480 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.760725 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.856663 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-scripts\") pod \"03921298-d6d8-404c-9ee5-c5101a92892e\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.856787 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03921298-d6d8-404c-9ee5-c5101a92892e-log-httpd\") pod \"03921298-d6d8-404c-9ee5-c5101a92892e\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.856850 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-sg-core-conf-yaml\") pod \"03921298-d6d8-404c-9ee5-c5101a92892e\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.856961 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ff5k\" (UniqueName: \"kubernetes.io/projected/03921298-d6d8-404c-9ee5-c5101a92892e-kube-api-access-6ff5k\") pod \"03921298-d6d8-404c-9ee5-c5101a92892e\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.857074 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03921298-d6d8-404c-9ee5-c5101a92892e-run-httpd\") pod \"03921298-d6d8-404c-9ee5-c5101a92892e\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.857127 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-combined-ca-bundle\") pod \"03921298-d6d8-404c-9ee5-c5101a92892e\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.857240 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-config-data\") pod \"03921298-d6d8-404c-9ee5-c5101a92892e\" (UID: \"03921298-d6d8-404c-9ee5-c5101a92892e\") " Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.863570 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03921298-d6d8-404c-9ee5-c5101a92892e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "03921298-d6d8-404c-9ee5-c5101a92892e" (UID: "03921298-d6d8-404c-9ee5-c5101a92892e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.864828 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03921298-d6d8-404c-9ee5-c5101a92892e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "03921298-d6d8-404c-9ee5-c5101a92892e" (UID: "03921298-d6d8-404c-9ee5-c5101a92892e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.884419 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-scripts" (OuterVolumeSpecName: "scripts") pod "03921298-d6d8-404c-9ee5-c5101a92892e" (UID: "03921298-d6d8-404c-9ee5-c5101a92892e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.884430 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03921298-d6d8-404c-9ee5-c5101a92892e-kube-api-access-6ff5k" (OuterVolumeSpecName: "kube-api-access-6ff5k") pod "03921298-d6d8-404c-9ee5-c5101a92892e" (UID: "03921298-d6d8-404c-9ee5-c5101a92892e"). InnerVolumeSpecName "kube-api-access-6ff5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.893944 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "03921298-d6d8-404c-9ee5-c5101a92892e" (UID: "03921298-d6d8-404c-9ee5-c5101a92892e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.946263 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03921298-d6d8-404c-9ee5-c5101a92892e" (UID: "03921298-d6d8-404c-9ee5-c5101a92892e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.958361 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-config-data" (OuterVolumeSpecName: "config-data") pod "03921298-d6d8-404c-9ee5-c5101a92892e" (UID: "03921298-d6d8-404c-9ee5-c5101a92892e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.960718 4756 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03921298-d6d8-404c-9ee5-c5101a92892e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.960757 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.960772 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.960786 4756 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03921298-d6d8-404c-9ee5-c5101a92892e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.960800 4756 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:04 crc kubenswrapper[4756]: I1124 12:46:04.960811 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ff5k\" (UniqueName: \"kubernetes.io/projected/03921298-d6d8-404c-9ee5-c5101a92892e-kube-api-access-6ff5k\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.062840 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03921298-d6d8-404c-9ee5-c5101a92892e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.088050 4756 generic.go:334] "Generic 
(PLEG): container finished" podID="03921298-d6d8-404c-9ee5-c5101a92892e" containerID="48ab96ca918f719608a94ba560c8d4f7cc100e5787e813d9b0b0e94ae767a57a" exitCode=137 Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.088200 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03921298-d6d8-404c-9ee5-c5101a92892e","Type":"ContainerDied","Data":"48ab96ca918f719608a94ba560c8d4f7cc100e5787e813d9b0b0e94ae767a57a"} Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.088319 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03921298-d6d8-404c-9ee5-c5101a92892e","Type":"ContainerDied","Data":"af3d9542220e4f8fcca1258d972b2e5412e2c6657ff943b906a2e3854f2028b1"} Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.088352 4756 scope.go:117] "RemoveContainer" containerID="48ab96ca918f719608a94ba560c8d4f7cc100e5787e813d9b0b0e94ae767a57a" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.088493 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.138942 4756 scope.go:117] "RemoveContainer" containerID="3e0fa2e0dca47ce5e12faaac5f1ce21785cfa1c9665d3a7b998629a3afa4acac" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.172265 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.177767 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.190584 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:46:05 crc kubenswrapper[4756]: E1124 12:46:05.191042 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ac68bd7-a5af-4908-8249-bfa5ef938acc" containerName="barbican-api" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.191056 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ac68bd7-a5af-4908-8249-bfa5ef938acc" containerName="barbican-api" Nov 24 12:46:05 crc kubenswrapper[4756]: E1124 12:46:05.191067 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" containerName="proxy-httpd" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.191074 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" containerName="proxy-httpd" Nov 24 12:46:05 crc kubenswrapper[4756]: E1124 12:46:05.191097 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" containerName="ceilometer-notification-agent" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.191103 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" containerName="ceilometer-notification-agent" Nov 24 12:46:05 crc kubenswrapper[4756]: E1124 12:46:05.191118 4756 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3ac68bd7-a5af-4908-8249-bfa5ef938acc" containerName="barbican-api-log" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.191126 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ac68bd7-a5af-4908-8249-bfa5ef938acc" containerName="barbican-api-log" Nov 24 12:46:05 crc kubenswrapper[4756]: E1124 12:46:05.191146 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" containerName="sg-core" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.191166 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" containerName="sg-core" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.191352 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" containerName="sg-core" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.191366 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" containerName="ceilometer-notification-agent" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.191387 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ac68bd7-a5af-4908-8249-bfa5ef938acc" containerName="barbican-api-log" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.191396 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ac68bd7-a5af-4908-8249-bfa5ef938acc" containerName="barbican-api" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.191407 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" containerName="proxy-httpd" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.192146 4756 scope.go:117] "RemoveContainer" containerID="a3050d179fbdab481a2010a5aaac94e4bfd117b6378e0ed2b08f39d1f8acdb47" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.193304 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.197689 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.197899 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.218719 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.269575 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-config-data\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.269702 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.269793 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.269927 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1435c5ee-2d2a-4df3-8ddf-997d77314458-run-httpd\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " 
pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.270032 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1435c5ee-2d2a-4df3-8ddf-997d77314458-log-httpd\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.270170 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-scripts\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.270458 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsgcs\" (UniqueName: \"kubernetes.io/projected/1435c5ee-2d2a-4df3-8ddf-997d77314458-kube-api-access-nsgcs\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.300874 4756 scope.go:117] "RemoveContainer" containerID="48ab96ca918f719608a94ba560c8d4f7cc100e5787e813d9b0b0e94ae767a57a" Nov 24 12:46:05 crc kubenswrapper[4756]: E1124 12:46:05.302967 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48ab96ca918f719608a94ba560c8d4f7cc100e5787e813d9b0b0e94ae767a57a\": container with ID starting with 48ab96ca918f719608a94ba560c8d4f7cc100e5787e813d9b0b0e94ae767a57a not found: ID does not exist" containerID="48ab96ca918f719608a94ba560c8d4f7cc100e5787e813d9b0b0e94ae767a57a" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.303035 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"48ab96ca918f719608a94ba560c8d4f7cc100e5787e813d9b0b0e94ae767a57a"} err="failed to get container status \"48ab96ca918f719608a94ba560c8d4f7cc100e5787e813d9b0b0e94ae767a57a\": rpc error: code = NotFound desc = could not find container \"48ab96ca918f719608a94ba560c8d4f7cc100e5787e813d9b0b0e94ae767a57a\": container with ID starting with 48ab96ca918f719608a94ba560c8d4f7cc100e5787e813d9b0b0e94ae767a57a not found: ID does not exist" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.303081 4756 scope.go:117] "RemoveContainer" containerID="3e0fa2e0dca47ce5e12faaac5f1ce21785cfa1c9665d3a7b998629a3afa4acac" Nov 24 12:46:05 crc kubenswrapper[4756]: E1124 12:46:05.304187 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e0fa2e0dca47ce5e12faaac5f1ce21785cfa1c9665d3a7b998629a3afa4acac\": container with ID starting with 3e0fa2e0dca47ce5e12faaac5f1ce21785cfa1c9665d3a7b998629a3afa4acac not found: ID does not exist" containerID="3e0fa2e0dca47ce5e12faaac5f1ce21785cfa1c9665d3a7b998629a3afa4acac" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.304242 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0fa2e0dca47ce5e12faaac5f1ce21785cfa1c9665d3a7b998629a3afa4acac"} err="failed to get container status \"3e0fa2e0dca47ce5e12faaac5f1ce21785cfa1c9665d3a7b998629a3afa4acac\": rpc error: code = NotFound desc = could not find container \"3e0fa2e0dca47ce5e12faaac5f1ce21785cfa1c9665d3a7b998629a3afa4acac\": container with ID starting with 3e0fa2e0dca47ce5e12faaac5f1ce21785cfa1c9665d3a7b998629a3afa4acac not found: ID does not exist" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.304276 4756 scope.go:117] "RemoveContainer" containerID="a3050d179fbdab481a2010a5aaac94e4bfd117b6378e0ed2b08f39d1f8acdb47" Nov 24 12:46:05 crc kubenswrapper[4756]: E1124 12:46:05.304505 4756 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a3050d179fbdab481a2010a5aaac94e4bfd117b6378e0ed2b08f39d1f8acdb47\": container with ID starting with a3050d179fbdab481a2010a5aaac94e4bfd117b6378e0ed2b08f39d1f8acdb47 not found: ID does not exist" containerID="a3050d179fbdab481a2010a5aaac94e4bfd117b6378e0ed2b08f39d1f8acdb47" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.304532 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3050d179fbdab481a2010a5aaac94e4bfd117b6378e0ed2b08f39d1f8acdb47"} err="failed to get container status \"a3050d179fbdab481a2010a5aaac94e4bfd117b6378e0ed2b08f39d1f8acdb47\": rpc error: code = NotFound desc = could not find container \"a3050d179fbdab481a2010a5aaac94e4bfd117b6378e0ed2b08f39d1f8acdb47\": container with ID starting with a3050d179fbdab481a2010a5aaac94e4bfd117b6378e0ed2b08f39d1f8acdb47 not found: ID does not exist" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.372875 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-config-data\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.372981 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.373060 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " 
pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.373095 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1435c5ee-2d2a-4df3-8ddf-997d77314458-run-httpd\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.373124 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1435c5ee-2d2a-4df3-8ddf-997d77314458-log-httpd\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.373187 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-scripts\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.373288 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsgcs\" (UniqueName: \"kubernetes.io/projected/1435c5ee-2d2a-4df3-8ddf-997d77314458-kube-api-access-nsgcs\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.374035 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1435c5ee-2d2a-4df3-8ddf-997d77314458-run-httpd\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.374386 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1435c5ee-2d2a-4df3-8ddf-997d77314458-log-httpd\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.379222 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-scripts\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.380071 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.381608 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.389964 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-config-data\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.393788 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsgcs\" (UniqueName: \"kubernetes.io/projected/1435c5ee-2d2a-4df3-8ddf-997d77314458-kube-api-access-nsgcs\") pod \"ceilometer-0\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.527437 4756 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.920927 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7646996fbc-r65ms"] Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.941025 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.954621 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.954891 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.957574 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 24 12:46:05 crc kubenswrapper[4756]: I1124 12:46:05.973589 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7646996fbc-r65ms"] Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.001677 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-etc-swift\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.001744 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-combined-ca-bundle\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.001839 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-internal-tls-certs\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.001928 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-run-httpd\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.001970 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-public-tls-certs\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.002068 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-log-httpd\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.002107 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-config-data\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 
12:46:06.002145 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbd7x\" (UniqueName: \"kubernetes.io/projected/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-kube-api-access-tbd7x\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.084341 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:46:06 crc kubenswrapper[4756]: W1124 12:46:06.091786 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1435c5ee_2d2a_4df3_8ddf_997d77314458.slice/crio-90d32d2fbb2a1bca37230da4d650e1159f5af173b1fde12d744be1879d66b5a3 WatchSource:0}: Error finding container 90d32d2fbb2a1bca37230da4d650e1159f5af173b1fde12d744be1879d66b5a3: Status 404 returned error can't find the container with id 90d32d2fbb2a1bca37230da4d650e1159f5af173b1fde12d744be1879d66b5a3 Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.103870 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-log-httpd\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.103917 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-config-data\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.103947 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbd7x\" 
(UniqueName: \"kubernetes.io/projected/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-kube-api-access-tbd7x\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.103986 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-etc-swift\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.104004 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-combined-ca-bundle\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.104037 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-internal-tls-certs\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.104073 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-run-httpd\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.104093 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-public-tls-certs\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.107577 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-run-httpd\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.109217 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-public-tls-certs\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.109379 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-log-httpd\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.109591 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-etc-swift\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.110635 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-internal-tls-certs\") pod \"swift-proxy-7646996fbc-r65ms\" 
(UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.113130 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-combined-ca-bundle\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.113342 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-config-data\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.127216 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbd7x\" (UniqueName: \"kubernetes.io/projected/cfa5b0a5-395b-463e-aeb1-21b5cca10b22-kube-api-access-tbd7x\") pod \"swift-proxy-7646996fbc-r65ms\" (UID: \"cfa5b0a5-395b-463e-aeb1-21b5cca10b22\") " pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.288556 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.494994 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03921298-d6d8-404c-9ee5-c5101a92892e" path="/var/lib/kubelet/pods/03921298-d6d8-404c-9ee5-c5101a92892e/volumes" Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.944754 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7646996fbc-r65ms"] Nov 24 12:46:06 crc kubenswrapper[4756]: W1124 12:46:06.959368 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfa5b0a5_395b_463e_aeb1_21b5cca10b22.slice/crio-2a9972a12ecd0d8a79102270547243a72ada3156d668928138975b1f699f35bb WatchSource:0}: Error finding container 2a9972a12ecd0d8a79102270547243a72ada3156d668928138975b1f699f35bb: Status 404 returned error can't find the container with id 2a9972a12ecd0d8a79102270547243a72ada3156d668928138975b1f699f35bb Nov 24 12:46:06 crc kubenswrapper[4756]: I1124 12:46:06.969051 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b958b5cb8-lff28" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Nov 24 12:46:07 crc kubenswrapper[4756]: I1124 12:46:07.127743 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1435c5ee-2d2a-4df3-8ddf-997d77314458","Type":"ContainerStarted","Data":"74077fec863537107e12fb89604cf75ab1089d987946258efb47b79ce3d5d0b6"} Nov 24 12:46:07 crc kubenswrapper[4756]: I1124 12:46:07.128238 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1435c5ee-2d2a-4df3-8ddf-997d77314458","Type":"ContainerStarted","Data":"90d32d2fbb2a1bca37230da4d650e1159f5af173b1fde12d744be1879d66b5a3"} Nov 24 12:46:07 crc kubenswrapper[4756]: I1124 12:46:07.136387 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7646996fbc-r65ms" event={"ID":"cfa5b0a5-395b-463e-aeb1-21b5cca10b22","Type":"ContainerStarted","Data":"2a9972a12ecd0d8a79102270547243a72ada3156d668928138975b1f699f35bb"} Nov 24 12:46:07 crc kubenswrapper[4756]: I1124 12:46:07.579615 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:46:08 crc kubenswrapper[4756]: I1124 12:46:08.147151 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1435c5ee-2d2a-4df3-8ddf-997d77314458","Type":"ContainerStarted","Data":"3ff38fc93a5d15fa4d300c96d315409e10d65fb7562e1ca40fa597082010fc1a"} Nov 24 12:46:08 crc kubenswrapper[4756]: I1124 12:46:08.149680 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7646996fbc-r65ms" event={"ID":"cfa5b0a5-395b-463e-aeb1-21b5cca10b22","Type":"ContainerStarted","Data":"24b75ced2def5c10e6638b0699565485c3ced0883806b9023c9f23670c84bb18"} Nov 24 12:46:08 crc kubenswrapper[4756]: I1124 12:46:08.149716 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7646996fbc-r65ms" event={"ID":"cfa5b0a5-395b-463e-aeb1-21b5cca10b22","Type":"ContainerStarted","Data":"0cf1a173b8ceaa28e135e37a113d2e08b777e8a5f3934542ae616cee05843551"} Nov 24 12:46:08 crc kubenswrapper[4756]: I1124 12:46:08.149948 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:08 crc kubenswrapper[4756]: I1124 12:46:08.150344 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:08 crc kubenswrapper[4756]: I1124 12:46:08.173496 4756 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7646996fbc-r65ms" podStartSLOduration=3.173475366 podStartE2EDuration="3.173475366s" podCreationTimestamp="2025-11-24 12:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:46:08.172615472 +0000 UTC m=+1100.530129624" watchObservedRunningTime="2025-11-24 12:46:08.173475366 +0000 UTC m=+1100.530989508" Nov 24 12:46:09 crc kubenswrapper[4756]: I1124 12:46:09.198665 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1435c5ee-2d2a-4df3-8ddf-997d77314458","Type":"ContainerStarted","Data":"320cfb75dceba55e8aa6726f6824879c666ff9d713f28266c0c091c11b85cdcf"} Nov 24 12:46:16 crc kubenswrapper[4756]: I1124 12:46:16.278069 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"08b95fcc-45c6-4618-bdad-3fb8c095e753","Type":"ContainerStarted","Data":"1f103cdf44d06a03923505a40f5e6d8155e460417d0040d19026bb372085f094"} Nov 24 12:46:16 crc kubenswrapper[4756]: I1124 12:46:16.282998 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1435c5ee-2d2a-4df3-8ddf-997d77314458","Type":"ContainerStarted","Data":"b6be1a47c0fdbb649402219ec418d5f3b64be81852b44ae66611962303e36597"} Nov 24 12:46:16 crc kubenswrapper[4756]: I1124 12:46:16.283099 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="ceilometer-central-agent" containerID="cri-o://74077fec863537107e12fb89604cf75ab1089d987946258efb47b79ce3d5d0b6" gracePeriod=30 Nov 24 12:46:16 crc kubenswrapper[4756]: I1124 12:46:16.283510 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:46:16 crc kubenswrapper[4756]: I1124 12:46:16.283626 4756 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="proxy-httpd" containerID="cri-o://b6be1a47c0fdbb649402219ec418d5f3b64be81852b44ae66611962303e36597" gracePeriod=30 Nov 24 12:46:16 crc kubenswrapper[4756]: I1124 12:46:16.283698 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="ceilometer-notification-agent" containerID="cri-o://3ff38fc93a5d15fa4d300c96d315409e10d65fb7562e1ca40fa597082010fc1a" gracePeriod=30 Nov 24 12:46:16 crc kubenswrapper[4756]: I1124 12:46:16.283805 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="sg-core" containerID="cri-o://320cfb75dceba55e8aa6726f6824879c666ff9d713f28266c0c091c11b85cdcf" gracePeriod=30 Nov 24 12:46:16 crc kubenswrapper[4756]: I1124 12:46:16.297603 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:16 crc kubenswrapper[4756]: I1124 12:46:16.302686 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.57215685 podStartE2EDuration="16.302669157s" podCreationTimestamp="2025-11-24 12:46:00 +0000 UTC" firstStartedPulling="2025-11-24 12:46:01.9293676 +0000 UTC m=+1094.286881742" lastFinishedPulling="2025-11-24 12:46:15.659879917 +0000 UTC m=+1108.017394049" observedRunningTime="2025-11-24 12:46:16.297372684 +0000 UTC m=+1108.654886836" watchObservedRunningTime="2025-11-24 12:46:16.302669157 +0000 UTC m=+1108.660183299" Nov 24 12:46:16 crc kubenswrapper[4756]: I1124 12:46:16.313835 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7646996fbc-r65ms" Nov 24 12:46:16 crc kubenswrapper[4756]: I1124 
12:46:16.334967 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7735438719999999 podStartE2EDuration="11.3349481s" podCreationTimestamp="2025-11-24 12:46:05 +0000 UTC" firstStartedPulling="2025-11-24 12:46:06.098726176 +0000 UTC m=+1098.456240318" lastFinishedPulling="2025-11-24 12:46:15.660130404 +0000 UTC m=+1108.017644546" observedRunningTime="2025-11-24 12:46:16.327679203 +0000 UTC m=+1108.685193355" watchObservedRunningTime="2025-11-24 12:46:16.3349481 +0000 UTC m=+1108.692462242" Nov 24 12:46:16 crc kubenswrapper[4756]: I1124 12:46:16.969393 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b958b5cb8-lff28" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.296368 4756 generic.go:334] "Generic (PLEG): container finished" podID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerID="320cfb75dceba55e8aa6726f6824879c666ff9d713f28266c0c091c11b85cdcf" exitCode=2 Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.296662 4756 generic.go:334] "Generic (PLEG): container finished" podID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerID="3ff38fc93a5d15fa4d300c96d315409e10d65fb7562e1ca40fa597082010fc1a" exitCode=0 Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.296675 4756 generic.go:334] "Generic (PLEG): container finished" podID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerID="74077fec863537107e12fb89604cf75ab1089d987946258efb47b79ce3d5d0b6" exitCode=0 Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.296412 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1435c5ee-2d2a-4df3-8ddf-997d77314458","Type":"ContainerDied","Data":"320cfb75dceba55e8aa6726f6824879c666ff9d713f28266c0c091c11b85cdcf"} Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.296780 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1435c5ee-2d2a-4df3-8ddf-997d77314458","Type":"ContainerDied","Data":"3ff38fc93a5d15fa4d300c96d315409e10d65fb7562e1ca40fa597082010fc1a"} Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.296794 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1435c5ee-2d2a-4df3-8ddf-997d77314458","Type":"ContainerDied","Data":"74077fec863537107e12fb89604cf75ab1089d987946258efb47b79ce3d5d0b6"} Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.563827 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qn6sw"] Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.565255 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qn6sw" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.580670 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qn6sw"] Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.667286 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx2m2\" (UniqueName: \"kubernetes.io/projected/fb2a0ae3-8559-497c-becc-d2b6dc77065c-kube-api-access-vx2m2\") pod \"nova-api-db-create-qn6sw\" (UID: \"fb2a0ae3-8559-497c-becc-d2b6dc77065c\") " pod="openstack/nova-api-db-create-qn6sw" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.667356 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb2a0ae3-8559-497c-becc-d2b6dc77065c-operator-scripts\") pod \"nova-api-db-create-qn6sw\" (UID: \"fb2a0ae3-8559-497c-becc-d2b6dc77065c\") " pod="openstack/nova-api-db-create-qn6sw" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.667911 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2119-account-create-2ctwr"] Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.669435 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2119-account-create-2ctwr" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.671574 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.679885 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-45l44"] Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.681328 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-45l44" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.695399 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2119-account-create-2ctwr"] Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.706652 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-45l44"] Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.770464 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59840fc2-c2e9-4258-8857-7c48709f1436-operator-scripts\") pod \"nova-api-2119-account-create-2ctwr\" (UID: \"59840fc2-c2e9-4258-8857-7c48709f1436\") " pod="openstack/nova-api-2119-account-create-2ctwr" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.770542 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97cfae0a-2fbc-495b-8084-b0b1701e541b-operator-scripts\") pod \"nova-cell0-db-create-45l44\" (UID: \"97cfae0a-2fbc-495b-8084-b0b1701e541b\") " pod="openstack/nova-cell0-db-create-45l44" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.770599 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sprlh\" (UniqueName: \"kubernetes.io/projected/97cfae0a-2fbc-495b-8084-b0b1701e541b-kube-api-access-sprlh\") pod \"nova-cell0-db-create-45l44\" (UID: \"97cfae0a-2fbc-495b-8084-b0b1701e541b\") " pod="openstack/nova-cell0-db-create-45l44" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.770661 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bztn4\" (UniqueName: \"kubernetes.io/projected/59840fc2-c2e9-4258-8857-7c48709f1436-kube-api-access-bztn4\") pod \"nova-api-2119-account-create-2ctwr\" (UID: 
\"59840fc2-c2e9-4258-8857-7c48709f1436\") " pod="openstack/nova-api-2119-account-create-2ctwr" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.770765 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx2m2\" (UniqueName: \"kubernetes.io/projected/fb2a0ae3-8559-497c-becc-d2b6dc77065c-kube-api-access-vx2m2\") pod \"nova-api-db-create-qn6sw\" (UID: \"fb2a0ae3-8559-497c-becc-d2b6dc77065c\") " pod="openstack/nova-api-db-create-qn6sw" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.770790 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb2a0ae3-8559-497c-becc-d2b6dc77065c-operator-scripts\") pod \"nova-api-db-create-qn6sw\" (UID: \"fb2a0ae3-8559-497c-becc-d2b6dc77065c\") " pod="openstack/nova-api-db-create-qn6sw" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.771766 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb2a0ae3-8559-497c-becc-d2b6dc77065c-operator-scripts\") pod \"nova-api-db-create-qn6sw\" (UID: \"fb2a0ae3-8559-497c-becc-d2b6dc77065c\") " pod="openstack/nova-api-db-create-qn6sw" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.785185 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mbg9c"] Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.786811 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mbg9c" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.793435 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx2m2\" (UniqueName: \"kubernetes.io/projected/fb2a0ae3-8559-497c-becc-d2b6dc77065c-kube-api-access-vx2m2\") pod \"nova-api-db-create-qn6sw\" (UID: \"fb2a0ae3-8559-497c-becc-d2b6dc77065c\") " pod="openstack/nova-api-db-create-qn6sw" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.808060 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mbg9c"] Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.873149 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40bd0562-377e-44fe-9e32-5b8256063b23-operator-scripts\") pod \"nova-cell1-db-create-mbg9c\" (UID: \"40bd0562-377e-44fe-9e32-5b8256063b23\") " pod="openstack/nova-cell1-db-create-mbg9c" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.873242 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59840fc2-c2e9-4258-8857-7c48709f1436-operator-scripts\") pod \"nova-api-2119-account-create-2ctwr\" (UID: \"59840fc2-c2e9-4258-8857-7c48709f1436\") " pod="openstack/nova-api-2119-account-create-2ctwr" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.873281 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97cfae0a-2fbc-495b-8084-b0b1701e541b-operator-scripts\") pod \"nova-cell0-db-create-45l44\" (UID: \"97cfae0a-2fbc-495b-8084-b0b1701e541b\") " pod="openstack/nova-cell0-db-create-45l44" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.873312 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sprlh\" 
(UniqueName: \"kubernetes.io/projected/97cfae0a-2fbc-495b-8084-b0b1701e541b-kube-api-access-sprlh\") pod \"nova-cell0-db-create-45l44\" (UID: \"97cfae0a-2fbc-495b-8084-b0b1701e541b\") " pod="openstack/nova-cell0-db-create-45l44" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.873334 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bztn4\" (UniqueName: \"kubernetes.io/projected/59840fc2-c2e9-4258-8857-7c48709f1436-kube-api-access-bztn4\") pod \"nova-api-2119-account-create-2ctwr\" (UID: \"59840fc2-c2e9-4258-8857-7c48709f1436\") " pod="openstack/nova-api-2119-account-create-2ctwr" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.873366 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9mkn\" (UniqueName: \"kubernetes.io/projected/40bd0562-377e-44fe-9e32-5b8256063b23-kube-api-access-w9mkn\") pod \"nova-cell1-db-create-mbg9c\" (UID: \"40bd0562-377e-44fe-9e32-5b8256063b23\") " pod="openstack/nova-cell1-db-create-mbg9c" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.874359 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59840fc2-c2e9-4258-8857-7c48709f1436-operator-scripts\") pod \"nova-api-2119-account-create-2ctwr\" (UID: \"59840fc2-c2e9-4258-8857-7c48709f1436\") " pod="openstack/nova-api-2119-account-create-2ctwr" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.874936 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97cfae0a-2fbc-495b-8084-b0b1701e541b-operator-scripts\") pod \"nova-cell0-db-create-45l44\" (UID: \"97cfae0a-2fbc-495b-8084-b0b1701e541b\") " pod="openstack/nova-cell0-db-create-45l44" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.886374 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qn6sw" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.900017 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-fd9c-account-create-zxl5z"] Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.900380 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bztn4\" (UniqueName: \"kubernetes.io/projected/59840fc2-c2e9-4258-8857-7c48709f1436-kube-api-access-bztn4\") pod \"nova-api-2119-account-create-2ctwr\" (UID: \"59840fc2-c2e9-4258-8857-7c48709f1436\") " pod="openstack/nova-api-2119-account-create-2ctwr" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.902671 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sprlh\" (UniqueName: \"kubernetes.io/projected/97cfae0a-2fbc-495b-8084-b0b1701e541b-kube-api-access-sprlh\") pod \"nova-cell0-db-create-45l44\" (UID: \"97cfae0a-2fbc-495b-8084-b0b1701e541b\") " pod="openstack/nova-cell0-db-create-45l44" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.902705 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fd9c-account-create-zxl5z" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.907515 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.916541 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fd9c-account-create-zxl5z"] Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.975766 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9mkn\" (UniqueName: \"kubernetes.io/projected/40bd0562-377e-44fe-9e32-5b8256063b23-kube-api-access-w9mkn\") pod \"nova-cell1-db-create-mbg9c\" (UID: \"40bd0562-377e-44fe-9e32-5b8256063b23\") " pod="openstack/nova-cell1-db-create-mbg9c" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.976381 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxmhm\" (UniqueName: \"kubernetes.io/projected/f5090b91-9f36-4fb1-95d0-aa6a48ae2bed-kube-api-access-fxmhm\") pod \"nova-cell0-fd9c-account-create-zxl5z\" (UID: \"f5090b91-9f36-4fb1-95d0-aa6a48ae2bed\") " pod="openstack/nova-cell0-fd9c-account-create-zxl5z" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.976547 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5090b91-9f36-4fb1-95d0-aa6a48ae2bed-operator-scripts\") pod \"nova-cell0-fd9c-account-create-zxl5z\" (UID: \"f5090b91-9f36-4fb1-95d0-aa6a48ae2bed\") " pod="openstack/nova-cell0-fd9c-account-create-zxl5z" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.976602 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40bd0562-377e-44fe-9e32-5b8256063b23-operator-scripts\") pod \"nova-cell1-db-create-mbg9c\" (UID: 
\"40bd0562-377e-44fe-9e32-5b8256063b23\") " pod="openstack/nova-cell1-db-create-mbg9c" Nov 24 12:46:17 crc kubenswrapper[4756]: I1124 12:46:17.983668 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40bd0562-377e-44fe-9e32-5b8256063b23-operator-scripts\") pod \"nova-cell1-db-create-mbg9c\" (UID: \"40bd0562-377e-44fe-9e32-5b8256063b23\") " pod="openstack/nova-cell1-db-create-mbg9c" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:17.991748 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2119-account-create-2ctwr" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.000904 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9mkn\" (UniqueName: \"kubernetes.io/projected/40bd0562-377e-44fe-9e32-5b8256063b23-kube-api-access-w9mkn\") pod \"nova-cell1-db-create-mbg9c\" (UID: \"40bd0562-377e-44fe-9e32-5b8256063b23\") " pod="openstack/nova-cell1-db-create-mbg9c" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.003811 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-45l44" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.014245 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ba07-account-create-gdv5z"] Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.019357 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ba07-account-create-gdv5z" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.021623 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.038238 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ba07-account-create-gdv5z"] Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.082682 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5090b91-9f36-4fb1-95d0-aa6a48ae2bed-operator-scripts\") pod \"nova-cell0-fd9c-account-create-zxl5z\" (UID: \"f5090b91-9f36-4fb1-95d0-aa6a48ae2bed\") " pod="openstack/nova-cell0-fd9c-account-create-zxl5z" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.082836 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlrns\" (UniqueName: \"kubernetes.io/projected/346aceff-936e-4266-aede-d48f252896f0-kube-api-access-wlrns\") pod \"nova-cell1-ba07-account-create-gdv5z\" (UID: \"346aceff-936e-4266-aede-d48f252896f0\") " pod="openstack/nova-cell1-ba07-account-create-gdv5z" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.082957 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxmhm\" (UniqueName: \"kubernetes.io/projected/f5090b91-9f36-4fb1-95d0-aa6a48ae2bed-kube-api-access-fxmhm\") pod \"nova-cell0-fd9c-account-create-zxl5z\" (UID: \"f5090b91-9f36-4fb1-95d0-aa6a48ae2bed\") " pod="openstack/nova-cell0-fd9c-account-create-zxl5z" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.082993 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346aceff-936e-4266-aede-d48f252896f0-operator-scripts\") pod 
\"nova-cell1-ba07-account-create-gdv5z\" (UID: \"346aceff-936e-4266-aede-d48f252896f0\") " pod="openstack/nova-cell1-ba07-account-create-gdv5z" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.083837 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5090b91-9f36-4fb1-95d0-aa6a48ae2bed-operator-scripts\") pod \"nova-cell0-fd9c-account-create-zxl5z\" (UID: \"f5090b91-9f36-4fb1-95d0-aa6a48ae2bed\") " pod="openstack/nova-cell0-fd9c-account-create-zxl5z" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.115513 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxmhm\" (UniqueName: \"kubernetes.io/projected/f5090b91-9f36-4fb1-95d0-aa6a48ae2bed-kube-api-access-fxmhm\") pod \"nova-cell0-fd9c-account-create-zxl5z\" (UID: \"f5090b91-9f36-4fb1-95d0-aa6a48ae2bed\") " pod="openstack/nova-cell0-fd9c-account-create-zxl5z" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.125274 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fd9c-account-create-zxl5z" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.175662 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mbg9c" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.194575 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346aceff-936e-4266-aede-d48f252896f0-operator-scripts\") pod \"nova-cell1-ba07-account-create-gdv5z\" (UID: \"346aceff-936e-4266-aede-d48f252896f0\") " pod="openstack/nova-cell1-ba07-account-create-gdv5z" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.194999 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlrns\" (UniqueName: \"kubernetes.io/projected/346aceff-936e-4266-aede-d48f252896f0-kube-api-access-wlrns\") pod \"nova-cell1-ba07-account-create-gdv5z\" (UID: \"346aceff-936e-4266-aede-d48f252896f0\") " pod="openstack/nova-cell1-ba07-account-create-gdv5z" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.196258 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346aceff-936e-4266-aede-d48f252896f0-operator-scripts\") pod \"nova-cell1-ba07-account-create-gdv5z\" (UID: \"346aceff-936e-4266-aede-d48f252896f0\") " pod="openstack/nova-cell1-ba07-account-create-gdv5z" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.235799 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlrns\" (UniqueName: \"kubernetes.io/projected/346aceff-936e-4266-aede-d48f252896f0-kube-api-access-wlrns\") pod \"nova-cell1-ba07-account-create-gdv5z\" (UID: \"346aceff-936e-4266-aede-d48f252896f0\") " pod="openstack/nova-cell1-ba07-account-create-gdv5z" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.439780 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ba07-account-create-gdv5z" Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.729120 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qn6sw"] Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.822133 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-45l44"] Nov 24 12:46:18 crc kubenswrapper[4756]: I1124 12:46:18.890830 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2119-account-create-2ctwr"] Nov 24 12:46:18 crc kubenswrapper[4756]: W1124 12:46:18.892537 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97cfae0a_2fbc_495b_8084_b0b1701e541b.slice/crio-50f5b1df3b604b7cdf5621faf8a739c47741ce69ce94f20bfadc5e73e4b2863b WatchSource:0}: Error finding container 50f5b1df3b604b7cdf5621faf8a739c47741ce69ce94f20bfadc5e73e4b2863b: Status 404 returned error can't find the container with id 50f5b1df3b604b7cdf5621faf8a739c47741ce69ce94f20bfadc5e73e4b2863b Nov 24 12:46:19 crc kubenswrapper[4756]: I1124 12:46:19.148939 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fd9c-account-create-zxl5z"] Nov 24 12:46:19 crc kubenswrapper[4756]: W1124 12:46:19.149806 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5090b91_9f36_4fb1_95d0_aa6a48ae2bed.slice/crio-aacfe5b0b01d86f37411cfdda935834ee97b37cf5e4b2a1705568ca39363a54c WatchSource:0}: Error finding container aacfe5b0b01d86f37411cfdda935834ee97b37cf5e4b2a1705568ca39363a54c: Status 404 returned error can't find the container with id aacfe5b0b01d86f37411cfdda935834ee97b37cf5e4b2a1705568ca39363a54c Nov 24 12:46:19 crc kubenswrapper[4756]: I1124 12:46:19.265026 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ba07-account-create-gdv5z"] Nov 
24 12:46:19 crc kubenswrapper[4756]: W1124 12:46:19.268903 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod346aceff_936e_4266_aede_d48f252896f0.slice/crio-feff45e0e3989d0f2137311fbd72145daaf170213b0b98ae137424704bdfe141 WatchSource:0}: Error finding container feff45e0e3989d0f2137311fbd72145daaf170213b0b98ae137424704bdfe141: Status 404 returned error can't find the container with id feff45e0e3989d0f2137311fbd72145daaf170213b0b98ae137424704bdfe141 Nov 24 12:46:19 crc kubenswrapper[4756]: I1124 12:46:19.277689 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mbg9c"] Nov 24 12:46:19 crc kubenswrapper[4756]: W1124 12:46:19.305432 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40bd0562_377e_44fe_9e32_5b8256063b23.slice/crio-51c8c69f79213f0cd63cc18a7605fa78614f849ce2ae606deb4f465853c1a636 WatchSource:0}: Error finding container 51c8c69f79213f0cd63cc18a7605fa78614f849ce2ae606deb4f465853c1a636: Status 404 returned error can't find the container with id 51c8c69f79213f0cd63cc18a7605fa78614f849ce2ae606deb4f465853c1a636 Nov 24 12:46:19 crc kubenswrapper[4756]: I1124 12:46:19.374231 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ba07-account-create-gdv5z" event={"ID":"346aceff-936e-4266-aede-d48f252896f0","Type":"ContainerStarted","Data":"feff45e0e3989d0f2137311fbd72145daaf170213b0b98ae137424704bdfe141"} Nov 24 12:46:19 crc kubenswrapper[4756]: I1124 12:46:19.380748 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fd9c-account-create-zxl5z" event={"ID":"f5090b91-9f36-4fb1-95d0-aa6a48ae2bed","Type":"ContainerStarted","Data":"aacfe5b0b01d86f37411cfdda935834ee97b37cf5e4b2a1705568ca39363a54c"} Nov 24 12:46:19 crc kubenswrapper[4756]: I1124 12:46:19.383638 4756 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-api-db-create-qn6sw" event={"ID":"fb2a0ae3-8559-497c-becc-d2b6dc77065c","Type":"ContainerStarted","Data":"68dc6630fc142541cc6f1a5f6eaef68e13f266a74cf2b4b23ae70f39b7a6d4ba"} Nov 24 12:46:19 crc kubenswrapper[4756]: I1124 12:46:19.386734 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mbg9c" event={"ID":"40bd0562-377e-44fe-9e32-5b8256063b23","Type":"ContainerStarted","Data":"51c8c69f79213f0cd63cc18a7605fa78614f849ce2ae606deb4f465853c1a636"} Nov 24 12:46:19 crc kubenswrapper[4756]: I1124 12:46:19.390485 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-45l44" event={"ID":"97cfae0a-2fbc-495b-8084-b0b1701e541b","Type":"ContainerStarted","Data":"50f5b1df3b604b7cdf5621faf8a739c47741ce69ce94f20bfadc5e73e4b2863b"} Nov 24 12:46:19 crc kubenswrapper[4756]: I1124 12:46:19.394674 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2119-account-create-2ctwr" event={"ID":"59840fc2-c2e9-4258-8857-7c48709f1436","Type":"ContainerStarted","Data":"0b94f4f2330950117fe84ea566cd6023d523990851f16c14fc8d2a689b1c22e0"} Nov 24 12:46:19 crc kubenswrapper[4756]: I1124 12:46:19.418788 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2119-account-create-2ctwr" podStartSLOduration=2.418758241 podStartE2EDuration="2.418758241s" podCreationTimestamp="2025-11-24 12:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:46:19.417849886 +0000 UTC m=+1111.775364048" watchObservedRunningTime="2025-11-24 12:46:19.418758241 +0000 UTC m=+1111.776272373" Nov 24 12:46:20 crc kubenswrapper[4756]: I1124 12:46:20.412510 4756 generic.go:334] "Generic (PLEG): container finished" podID="40bd0562-377e-44fe-9e32-5b8256063b23" containerID="30a76c4adc68a52b75c47cb2f2be3ecddbb7554883535a49cbb7ea5746deecb6" exitCode=0 Nov 24 12:46:20 crc 
kubenswrapper[4756]: I1124 12:46:20.412597 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mbg9c" event={"ID":"40bd0562-377e-44fe-9e32-5b8256063b23","Type":"ContainerDied","Data":"30a76c4adc68a52b75c47cb2f2be3ecddbb7554883535a49cbb7ea5746deecb6"} Nov 24 12:46:20 crc kubenswrapper[4756]: I1124 12:46:20.414979 4756 generic.go:334] "Generic (PLEG): container finished" podID="97cfae0a-2fbc-495b-8084-b0b1701e541b" containerID="ab9299a31a21747d7b25a7f96259d026a67dd463f6d0ce87a644694df9c6d32e" exitCode=0 Nov 24 12:46:20 crc kubenswrapper[4756]: I1124 12:46:20.415050 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-45l44" event={"ID":"97cfae0a-2fbc-495b-8084-b0b1701e541b","Type":"ContainerDied","Data":"ab9299a31a21747d7b25a7f96259d026a67dd463f6d0ce87a644694df9c6d32e"} Nov 24 12:46:20 crc kubenswrapper[4756]: I1124 12:46:20.418148 4756 generic.go:334] "Generic (PLEG): container finished" podID="59840fc2-c2e9-4258-8857-7c48709f1436" containerID="686728c23356b5906da3533de4d5d4d335cdeb3de3f10c32716073715c24af4f" exitCode=0 Nov 24 12:46:20 crc kubenswrapper[4756]: I1124 12:46:20.418240 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2119-account-create-2ctwr" event={"ID":"59840fc2-c2e9-4258-8857-7c48709f1436","Type":"ContainerDied","Data":"686728c23356b5906da3533de4d5d4d335cdeb3de3f10c32716073715c24af4f"} Nov 24 12:46:20 crc kubenswrapper[4756]: I1124 12:46:20.420366 4756 generic.go:334] "Generic (PLEG): container finished" podID="346aceff-936e-4266-aede-d48f252896f0" containerID="83148707b1b4f63aaaaa8abdaf29d945c528d087c54856ddef4a70c7d6de879b" exitCode=0 Nov 24 12:46:20 crc kubenswrapper[4756]: I1124 12:46:20.420424 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ba07-account-create-gdv5z" 
event={"ID":"346aceff-936e-4266-aede-d48f252896f0","Type":"ContainerDied","Data":"83148707b1b4f63aaaaa8abdaf29d945c528d087c54856ddef4a70c7d6de879b"} Nov 24 12:46:20 crc kubenswrapper[4756]: I1124 12:46:20.422396 4756 generic.go:334] "Generic (PLEG): container finished" podID="f5090b91-9f36-4fb1-95d0-aa6a48ae2bed" containerID="471257f0bae6820d9871d58a02713c0a0a133c2ea5884e630e526b2109354f36" exitCode=0 Nov 24 12:46:20 crc kubenswrapper[4756]: I1124 12:46:20.422444 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fd9c-account-create-zxl5z" event={"ID":"f5090b91-9f36-4fb1-95d0-aa6a48ae2bed","Type":"ContainerDied","Data":"471257f0bae6820d9871d58a02713c0a0a133c2ea5884e630e526b2109354f36"} Nov 24 12:46:20 crc kubenswrapper[4756]: I1124 12:46:20.423901 4756 generic.go:334] "Generic (PLEG): container finished" podID="fb2a0ae3-8559-497c-becc-d2b6dc77065c" containerID="3068fdd3e96ede4aa0d3f49496431361e47567a25276ad28c333b6a0f4daf407" exitCode=0 Nov 24 12:46:20 crc kubenswrapper[4756]: I1124 12:46:20.423931 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qn6sw" event={"ID":"fb2a0ae3-8559-497c-becc-d2b6dc77065c","Type":"ContainerDied","Data":"3068fdd3e96ede4aa0d3f49496431361e47567a25276ad28c333b6a0f4daf407"} Nov 24 12:46:21 crc kubenswrapper[4756]: I1124 12:46:21.927283 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2119-account-create-2ctwr" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.013300 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59840fc2-c2e9-4258-8857-7c48709f1436-operator-scripts\") pod \"59840fc2-c2e9-4258-8857-7c48709f1436\" (UID: \"59840fc2-c2e9-4258-8857-7c48709f1436\") " Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.013362 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bztn4\" (UniqueName: \"kubernetes.io/projected/59840fc2-c2e9-4258-8857-7c48709f1436-kube-api-access-bztn4\") pod \"59840fc2-c2e9-4258-8857-7c48709f1436\" (UID: \"59840fc2-c2e9-4258-8857-7c48709f1436\") " Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.014127 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59840fc2-c2e9-4258-8857-7c48709f1436-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59840fc2-c2e9-4258-8857-7c48709f1436" (UID: "59840fc2-c2e9-4258-8857-7c48709f1436"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.033310 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59840fc2-c2e9-4258-8857-7c48709f1436-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.037359 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59840fc2-c2e9-4258-8857-7c48709f1436-kube-api-access-bztn4" (OuterVolumeSpecName: "kube-api-access-bztn4") pod "59840fc2-c2e9-4258-8857-7c48709f1436" (UID: "59840fc2-c2e9-4258-8857-7c48709f1436"). InnerVolumeSpecName "kube-api-access-bztn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.140599 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bztn4\" (UniqueName: \"kubernetes.io/projected/59840fc2-c2e9-4258-8857-7c48709f1436-kube-api-access-bztn4\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.354144 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mbg9c" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.359091 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fd9c-account-create-zxl5z" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.367244 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-45l44" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.377825 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ba07-account-create-gdv5z" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.392487 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qn6sw" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.447381 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346aceff-936e-4266-aede-d48f252896f0-operator-scripts\") pod \"346aceff-936e-4266-aede-d48f252896f0\" (UID: \"346aceff-936e-4266-aede-d48f252896f0\") " Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.447535 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97cfae0a-2fbc-495b-8084-b0b1701e541b-operator-scripts\") pod \"97cfae0a-2fbc-495b-8084-b0b1701e541b\" (UID: \"97cfae0a-2fbc-495b-8084-b0b1701e541b\") " Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.447568 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb2a0ae3-8559-497c-becc-d2b6dc77065c-operator-scripts\") pod \"fb2a0ae3-8559-497c-becc-d2b6dc77065c\" (UID: \"fb2a0ae3-8559-497c-becc-d2b6dc77065c\") " Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.447585 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5090b91-9f36-4fb1-95d0-aa6a48ae2bed-operator-scripts\") pod \"f5090b91-9f36-4fb1-95d0-aa6a48ae2bed\" (UID: \"f5090b91-9f36-4fb1-95d0-aa6a48ae2bed\") " Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.447882 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9mkn\" (UniqueName: \"kubernetes.io/projected/40bd0562-377e-44fe-9e32-5b8256063b23-kube-api-access-w9mkn\") pod \"40bd0562-377e-44fe-9e32-5b8256063b23\" (UID: \"40bd0562-377e-44fe-9e32-5b8256063b23\") " Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.448050 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-wlrns\" (UniqueName: \"kubernetes.io/projected/346aceff-936e-4266-aede-d48f252896f0-kube-api-access-wlrns\") pod \"346aceff-936e-4266-aede-d48f252896f0\" (UID: \"346aceff-936e-4266-aede-d48f252896f0\") " Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.448225 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sprlh\" (UniqueName: \"kubernetes.io/projected/97cfae0a-2fbc-495b-8084-b0b1701e541b-kube-api-access-sprlh\") pod \"97cfae0a-2fbc-495b-8084-b0b1701e541b\" (UID: \"97cfae0a-2fbc-495b-8084-b0b1701e541b\") " Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.448262 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxmhm\" (UniqueName: \"kubernetes.io/projected/f5090b91-9f36-4fb1-95d0-aa6a48ae2bed-kube-api-access-fxmhm\") pod \"f5090b91-9f36-4fb1-95d0-aa6a48ae2bed\" (UID: \"f5090b91-9f36-4fb1-95d0-aa6a48ae2bed\") " Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.448320 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40bd0562-377e-44fe-9e32-5b8256063b23-operator-scripts\") pod \"40bd0562-377e-44fe-9e32-5b8256063b23\" (UID: \"40bd0562-377e-44fe-9e32-5b8256063b23\") " Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.448343 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx2m2\" (UniqueName: \"kubernetes.io/projected/fb2a0ae3-8559-497c-becc-d2b6dc77065c-kube-api-access-vx2m2\") pod \"fb2a0ae3-8559-497c-becc-d2b6dc77065c\" (UID: \"fb2a0ae3-8559-497c-becc-d2b6dc77065c\") " Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.449031 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb2a0ae3-8559-497c-becc-d2b6dc77065c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"fb2a0ae3-8559-497c-becc-d2b6dc77065c" (UID: "fb2a0ae3-8559-497c-becc-d2b6dc77065c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.449775 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/346aceff-936e-4266-aede-d48f252896f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "346aceff-936e-4266-aede-d48f252896f0" (UID: "346aceff-936e-4266-aede-d48f252896f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.450359 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97cfae0a-2fbc-495b-8084-b0b1701e541b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97cfae0a-2fbc-495b-8084-b0b1701e541b" (UID: "97cfae0a-2fbc-495b-8084-b0b1701e541b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.451081 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40bd0562-377e-44fe-9e32-5b8256063b23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40bd0562-377e-44fe-9e32-5b8256063b23" (UID: "40bd0562-377e-44fe-9e32-5b8256063b23"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.452476 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5090b91-9f36-4fb1-95d0-aa6a48ae2bed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5090b91-9f36-4fb1-95d0-aa6a48ae2bed" (UID: "f5090b91-9f36-4fb1-95d0-aa6a48ae2bed"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.458353 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346aceff-936e-4266-aede-d48f252896f0-kube-api-access-wlrns" (OuterVolumeSpecName: "kube-api-access-wlrns") pod "346aceff-936e-4266-aede-d48f252896f0" (UID: "346aceff-936e-4266-aede-d48f252896f0"). InnerVolumeSpecName "kube-api-access-wlrns". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.466490 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5090b91-9f36-4fb1-95d0-aa6a48ae2bed-kube-api-access-fxmhm" (OuterVolumeSpecName: "kube-api-access-fxmhm") pod "f5090b91-9f36-4fb1-95d0-aa6a48ae2bed" (UID: "f5090b91-9f36-4fb1-95d0-aa6a48ae2bed"). InnerVolumeSpecName "kube-api-access-fxmhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.467594 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97cfae0a-2fbc-495b-8084-b0b1701e541b-kube-api-access-sprlh" (OuterVolumeSpecName: "kube-api-access-sprlh") pod "97cfae0a-2fbc-495b-8084-b0b1701e541b" (UID: "97cfae0a-2fbc-495b-8084-b0b1701e541b"). InnerVolumeSpecName "kube-api-access-sprlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.468362 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb2a0ae3-8559-497c-becc-d2b6dc77065c-kube-api-access-vx2m2" (OuterVolumeSpecName: "kube-api-access-vx2m2") pod "fb2a0ae3-8559-497c-becc-d2b6dc77065c" (UID: "fb2a0ae3-8559-497c-becc-d2b6dc77065c"). InnerVolumeSpecName "kube-api-access-vx2m2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.473814 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40bd0562-377e-44fe-9e32-5b8256063b23-kube-api-access-w9mkn" (OuterVolumeSpecName: "kube-api-access-w9mkn") pod "40bd0562-377e-44fe-9e32-5b8256063b23" (UID: "40bd0562-377e-44fe-9e32-5b8256063b23"). InnerVolumeSpecName "kube-api-access-w9mkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.477531 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qn6sw" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.479845 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mbg9c" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.482622 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-45l44" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.490607 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2119-account-create-2ctwr" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.493194 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ba07-account-create-gdv5z" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.500945 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fd9c-account-create-zxl5z" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.513196 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qn6sw" event={"ID":"fb2a0ae3-8559-497c-becc-d2b6dc77065c","Type":"ContainerDied","Data":"68dc6630fc142541cc6f1a5f6eaef68e13f266a74cf2b4b23ae70f39b7a6d4ba"} Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.513244 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68dc6630fc142541cc6f1a5f6eaef68e13f266a74cf2b4b23ae70f39b7a6d4ba" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.513256 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mbg9c" event={"ID":"40bd0562-377e-44fe-9e32-5b8256063b23","Type":"ContainerDied","Data":"51c8c69f79213f0cd63cc18a7605fa78614f849ce2ae606deb4f465853c1a636"} Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.513266 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51c8c69f79213f0cd63cc18a7605fa78614f849ce2ae606deb4f465853c1a636" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.513274 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-45l44" event={"ID":"97cfae0a-2fbc-495b-8084-b0b1701e541b","Type":"ContainerDied","Data":"50f5b1df3b604b7cdf5621faf8a739c47741ce69ce94f20bfadc5e73e4b2863b"} Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.513283 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50f5b1df3b604b7cdf5621faf8a739c47741ce69ce94f20bfadc5e73e4b2863b" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.513294 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2119-account-create-2ctwr" event={"ID":"59840fc2-c2e9-4258-8857-7c48709f1436","Type":"ContainerDied","Data":"0b94f4f2330950117fe84ea566cd6023d523990851f16c14fc8d2a689b1c22e0"} Nov 24 12:46:22 crc 
kubenswrapper[4756]: I1124 12:46:22.513304 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b94f4f2330950117fe84ea566cd6023d523990851f16c14fc8d2a689b1c22e0" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.513320 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ba07-account-create-gdv5z" event={"ID":"346aceff-936e-4266-aede-d48f252896f0","Type":"ContainerDied","Data":"feff45e0e3989d0f2137311fbd72145daaf170213b0b98ae137424704bdfe141"} Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.513335 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feff45e0e3989d0f2137311fbd72145daaf170213b0b98ae137424704bdfe141" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.513346 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fd9c-account-create-zxl5z" event={"ID":"f5090b91-9f36-4fb1-95d0-aa6a48ae2bed","Type":"ContainerDied","Data":"aacfe5b0b01d86f37411cfdda935834ee97b37cf5e4b2a1705568ca39363a54c"} Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.513357 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aacfe5b0b01d86f37411cfdda935834ee97b37cf5e4b2a1705568ca39363a54c" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.551452 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40bd0562-377e-44fe-9e32-5b8256063b23-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.551491 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx2m2\" (UniqueName: \"kubernetes.io/projected/fb2a0ae3-8559-497c-becc-d2b6dc77065c-kube-api-access-vx2m2\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.551505 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/346aceff-936e-4266-aede-d48f252896f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.551518 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97cfae0a-2fbc-495b-8084-b0b1701e541b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.551529 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb2a0ae3-8559-497c-becc-d2b6dc77065c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.551542 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5090b91-9f36-4fb1-95d0-aa6a48ae2bed-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.551553 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9mkn\" (UniqueName: \"kubernetes.io/projected/40bd0562-377e-44fe-9e32-5b8256063b23-kube-api-access-w9mkn\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.551565 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlrns\" (UniqueName: \"kubernetes.io/projected/346aceff-936e-4266-aede-d48f252896f0-kube-api-access-wlrns\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.551577 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sprlh\" (UniqueName: \"kubernetes.io/projected/97cfae0a-2fbc-495b-8084-b0b1701e541b-kube-api-access-sprlh\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.551589 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxmhm\" (UniqueName: 
\"kubernetes.io/projected/f5090b91-9f36-4fb1-95d0-aa6a48ae2bed-kube-api-access-fxmhm\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.970688 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 12:46:22 crc kubenswrapper[4756]: I1124 12:46:22.971350 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="9e9988f7-fa01-4411-986d-ac6ba024a7a5" containerName="watcher-decision-engine" containerID="cri-o://f70104dffd593cb2056af9068fbf795a17c20dd15ab1bba51c7806fc88b1d0b0" gracePeriod=30 Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.532433 4756 generic.go:334] "Generic (PLEG): container finished" podID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerID="83264baafdec5f896f80ba132bef8428f337228e6f1141cf6f1da54db08c9038" exitCode=137 Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.532620 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b958b5cb8-lff28" event={"ID":"ac680988-de91-4b39-ac09-3938cd5a2f91","Type":"ContainerDied","Data":"83264baafdec5f896f80ba132bef8428f337228e6f1141cf6f1da54db08c9038"} Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.713693 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.805666 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcmgv\" (UniqueName: \"kubernetes.io/projected/ac680988-de91-4b39-ac09-3938cd5a2f91-kube-api-access-pcmgv\") pod \"ac680988-de91-4b39-ac09-3938cd5a2f91\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.805733 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac680988-de91-4b39-ac09-3938cd5a2f91-config-data\") pod \"ac680988-de91-4b39-ac09-3938cd5a2f91\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.805830 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-horizon-tls-certs\") pod \"ac680988-de91-4b39-ac09-3938cd5a2f91\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.805919 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-horizon-secret-key\") pod \"ac680988-de91-4b39-ac09-3938cd5a2f91\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.806019 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac680988-de91-4b39-ac09-3938cd5a2f91-scripts\") pod \"ac680988-de91-4b39-ac09-3938cd5a2f91\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.806052 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/ac680988-de91-4b39-ac09-3938cd5a2f91-logs\") pod \"ac680988-de91-4b39-ac09-3938cd5a2f91\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.806102 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-combined-ca-bundle\") pod \"ac680988-de91-4b39-ac09-3938cd5a2f91\" (UID: \"ac680988-de91-4b39-ac09-3938cd5a2f91\") " Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.807552 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac680988-de91-4b39-ac09-3938cd5a2f91-logs" (OuterVolumeSpecName: "logs") pod "ac680988-de91-4b39-ac09-3938cd5a2f91" (UID: "ac680988-de91-4b39-ac09-3938cd5a2f91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.807748 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac680988-de91-4b39-ac09-3938cd5a2f91-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.831359 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ac680988-de91-4b39-ac09-3938cd5a2f91" (UID: "ac680988-de91-4b39-ac09-3938cd5a2f91"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.831521 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac680988-de91-4b39-ac09-3938cd5a2f91-kube-api-access-pcmgv" (OuterVolumeSpecName: "kube-api-access-pcmgv") pod "ac680988-de91-4b39-ac09-3938cd5a2f91" (UID: "ac680988-de91-4b39-ac09-3938cd5a2f91"). InnerVolumeSpecName "kube-api-access-pcmgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.890970 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac680988-de91-4b39-ac09-3938cd5a2f91-scripts" (OuterVolumeSpecName: "scripts") pod "ac680988-de91-4b39-ac09-3938cd5a2f91" (UID: "ac680988-de91-4b39-ac09-3938cd5a2f91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.915776 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcmgv\" (UniqueName: \"kubernetes.io/projected/ac680988-de91-4b39-ac09-3938cd5a2f91-kube-api-access-pcmgv\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.915828 4756 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.915846 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac680988-de91-4b39-ac09-3938cd5a2f91-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.921781 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac680988-de91-4b39-ac09-3938cd5a2f91-config-data" (OuterVolumeSpecName: "config-data") 
pod "ac680988-de91-4b39-ac09-3938cd5a2f91" (UID: "ac680988-de91-4b39-ac09-3938cd5a2f91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.923530 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac680988-de91-4b39-ac09-3938cd5a2f91" (UID: "ac680988-de91-4b39-ac09-3938cd5a2f91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:24 crc kubenswrapper[4756]: I1124 12:46:24.966797 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "ac680988-de91-4b39-ac09-3938cd5a2f91" (UID: "ac680988-de91-4b39-ac09-3938cd5a2f91"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:24.998681 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.019685 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac680988-de91-4b39-ac09-3938cd5a2f91-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.019730 4756 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.019744 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac680988-de91-4b39-ac09-3938cd5a2f91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.120977 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-custom-prometheus-ca\") pod \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.121291 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9988f7-fa01-4411-986d-ac6ba024a7a5-logs\") pod \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.121343 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-config-data\") pod \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.121360 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-combined-ca-bundle\") pod \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.121450 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjss2\" (UniqueName: \"kubernetes.io/projected/9e9988f7-fa01-4411-986d-ac6ba024a7a5-kube-api-access-bjss2\") pod \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\" (UID: \"9e9988f7-fa01-4411-986d-ac6ba024a7a5\") " Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.124474 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9988f7-fa01-4411-986d-ac6ba024a7a5-logs" (OuterVolumeSpecName: "logs") pod "9e9988f7-fa01-4411-986d-ac6ba024a7a5" (UID: "9e9988f7-fa01-4411-986d-ac6ba024a7a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.127006 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9988f7-fa01-4411-986d-ac6ba024a7a5-kube-api-access-bjss2" (OuterVolumeSpecName: "kube-api-access-bjss2") pod "9e9988f7-fa01-4411-986d-ac6ba024a7a5" (UID: "9e9988f7-fa01-4411-986d-ac6ba024a7a5"). InnerVolumeSpecName "kube-api-access-bjss2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.162171 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e9988f7-fa01-4411-986d-ac6ba024a7a5" (UID: "9e9988f7-fa01-4411-986d-ac6ba024a7a5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.171246 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9e9988f7-fa01-4411-986d-ac6ba024a7a5" (UID: "9e9988f7-fa01-4411-986d-ac6ba024a7a5"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.183343 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-config-data" (OuterVolumeSpecName: "config-data") pod "9e9988f7-fa01-4411-986d-ac6ba024a7a5" (UID: "9e9988f7-fa01-4411-986d-ac6ba024a7a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.233292 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9988f7-fa01-4411-986d-ac6ba024a7a5-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.233337 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.233379 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.233394 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjss2\" (UniqueName: \"kubernetes.io/projected/9e9988f7-fa01-4411-986d-ac6ba024a7a5-kube-api-access-bjss2\") on node \"crc\" DevicePath \"\"" Nov 
24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.233408 4756 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e9988f7-fa01-4411-986d-ac6ba024a7a5-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.546311 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b958b5cb8-lff28" event={"ID":"ac680988-de91-4b39-ac09-3938cd5a2f91","Type":"ContainerDied","Data":"8cbe0b3ecfef46f7d68524226ae308001ff6c658bdf2936cfebcadb0fe5fb156"} Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.546388 4756 scope.go:117] "RemoveContainer" containerID="d4210a5e297caab6fdf04b86fae8401d9ffa8250dd43be159b9b134b31037963" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.546523 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b958b5cb8-lff28" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.560980 4756 generic.go:334] "Generic (PLEG): container finished" podID="9e9988f7-fa01-4411-986d-ac6ba024a7a5" containerID="f70104dffd593cb2056af9068fbf795a17c20dd15ab1bba51c7806fc88b1d0b0" exitCode=0 Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.561035 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9e9988f7-fa01-4411-986d-ac6ba024a7a5","Type":"ContainerDied","Data":"f70104dffd593cb2056af9068fbf795a17c20dd15ab1bba51c7806fc88b1d0b0"} Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.561068 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9e9988f7-fa01-4411-986d-ac6ba024a7a5","Type":"ContainerDied","Data":"d615550f2cfef9a71ae95420f9f8de433cf1d807f61105c64342305700f02ed3"} Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.561130 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.660174 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.675312 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.700187 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 12:46:25 crc kubenswrapper[4756]: E1124 12:46:25.700710 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.700734 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon" Nov 24 12:46:25 crc kubenswrapper[4756]: E1124 12:46:25.700752 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5090b91-9f36-4fb1-95d0-aa6a48ae2bed" containerName="mariadb-account-create" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.700758 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5090b91-9f36-4fb1-95d0-aa6a48ae2bed" containerName="mariadb-account-create" Nov 24 12:46:25 crc kubenswrapper[4756]: E1124 12:46:25.700770 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59840fc2-c2e9-4258-8857-7c48709f1436" containerName="mariadb-account-create" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.700779 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="59840fc2-c2e9-4258-8857-7c48709f1436" containerName="mariadb-account-create" Nov 24 12:46:25 crc kubenswrapper[4756]: E1124 12:46:25.700795 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40bd0562-377e-44fe-9e32-5b8256063b23" containerName="mariadb-database-create" Nov 24 12:46:25 crc 
kubenswrapper[4756]: I1124 12:46:25.700801 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="40bd0562-377e-44fe-9e32-5b8256063b23" containerName="mariadb-database-create" Nov 24 12:46:25 crc kubenswrapper[4756]: E1124 12:46:25.700818 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97cfae0a-2fbc-495b-8084-b0b1701e541b" containerName="mariadb-database-create" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.700824 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="97cfae0a-2fbc-495b-8084-b0b1701e541b" containerName="mariadb-database-create" Nov 24 12:46:25 crc kubenswrapper[4756]: E1124 12:46:25.700835 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9988f7-fa01-4411-986d-ac6ba024a7a5" containerName="watcher-decision-engine" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.700841 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9988f7-fa01-4411-986d-ac6ba024a7a5" containerName="watcher-decision-engine" Nov 24 12:46:25 crc kubenswrapper[4756]: E1124 12:46:25.700860 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346aceff-936e-4266-aede-d48f252896f0" containerName="mariadb-account-create" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.700866 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="346aceff-936e-4266-aede-d48f252896f0" containerName="mariadb-account-create" Nov 24 12:46:25 crc kubenswrapper[4756]: E1124 12:46:25.700885 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon-log" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.700891 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon-log" Nov 24 12:46:25 crc kubenswrapper[4756]: E1124 12:46:25.700902 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb2a0ae3-8559-497c-becc-d2b6dc77065c" 
containerName="mariadb-database-create" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.700909 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb2a0ae3-8559-497c-becc-d2b6dc77065c" containerName="mariadb-database-create" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.701090 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb2a0ae3-8559-497c-becc-d2b6dc77065c" containerName="mariadb-database-create" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.701101 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="40bd0562-377e-44fe-9e32-5b8256063b23" containerName="mariadb-database-create" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.701135 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="59840fc2-c2e9-4258-8857-7c48709f1436" containerName="mariadb-account-create" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.701143 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.701168 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="97cfae0a-2fbc-495b-8084-b0b1701e541b" containerName="mariadb-database-create" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.701179 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9988f7-fa01-4411-986d-ac6ba024a7a5" containerName="watcher-decision-engine" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.701186 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="346aceff-936e-4266-aede-d48f252896f0" containerName="mariadb-account-create" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.701201 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" containerName="horizon-log" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.701215 4756 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f5090b91-9f36-4fb1-95d0-aa6a48ae2bed" containerName="mariadb-account-create" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.701928 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.706999 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.715879 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b958b5cb8-lff28"] Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.753645 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.780270 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7b958b5cb8-lff28"] Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.820033 4756 scope.go:117] "RemoveContainer" containerID="83264baafdec5f896f80ba132bef8428f337228e6f1141cf6f1da54db08c9038" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.848014 4756 scope.go:117] "RemoveContainer" containerID="f70104dffd593cb2056af9068fbf795a17c20dd15ab1bba51c7806fc88b1d0b0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.849091 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eff196d-2bdb-48c0-9c64-f8f0836f5450-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.849243 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eff196d-2bdb-48c0-9c64-f8f0836f5450-logs\") pod 
\"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.849486 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eff196d-2bdb-48c0-9c64-f8f0836f5450-config-data\") pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.849651 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x25fd\" (UniqueName: \"kubernetes.io/projected/6eff196d-2bdb-48c0-9c64-f8f0836f5450-kube-api-access-x25fd\") pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.849794 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6eff196d-2bdb-48c0-9c64-f8f0836f5450-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.871653 4756 scope.go:117] "RemoveContainer" containerID="f70104dffd593cb2056af9068fbf795a17c20dd15ab1bba51c7806fc88b1d0b0" Nov 24 12:46:25 crc kubenswrapper[4756]: E1124 12:46:25.872280 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f70104dffd593cb2056af9068fbf795a17c20dd15ab1bba51c7806fc88b1d0b0\": container with ID starting with f70104dffd593cb2056af9068fbf795a17c20dd15ab1bba51c7806fc88b1d0b0 not found: ID does not exist" containerID="f70104dffd593cb2056af9068fbf795a17c20dd15ab1bba51c7806fc88b1d0b0" Nov 24 
12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.872382 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70104dffd593cb2056af9068fbf795a17c20dd15ab1bba51c7806fc88b1d0b0"} err="failed to get container status \"f70104dffd593cb2056af9068fbf795a17c20dd15ab1bba51c7806fc88b1d0b0\": rpc error: code = NotFound desc = could not find container \"f70104dffd593cb2056af9068fbf795a17c20dd15ab1bba51c7806fc88b1d0b0\": container with ID starting with f70104dffd593cb2056af9068fbf795a17c20dd15ab1bba51c7806fc88b1d0b0 not found: ID does not exist" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.952102 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x25fd\" (UniqueName: \"kubernetes.io/projected/6eff196d-2bdb-48c0-9c64-f8f0836f5450-kube-api-access-x25fd\") pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.952207 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6eff196d-2bdb-48c0-9c64-f8f0836f5450-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.952290 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eff196d-2bdb-48c0-9c64-f8f0836f5450-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.952376 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eff196d-2bdb-48c0-9c64-f8f0836f5450-logs\") 
pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.952453 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eff196d-2bdb-48c0-9c64-f8f0836f5450-config-data\") pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.953064 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eff196d-2bdb-48c0-9c64-f8f0836f5450-logs\") pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.956457 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6eff196d-2bdb-48c0-9c64-f8f0836f5450-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.956688 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eff196d-2bdb-48c0-9c64-f8f0836f5450-config-data\") pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.957015 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eff196d-2bdb-48c0-9c64-f8f0836f5450-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 
12:46:25 crc kubenswrapper[4756]: I1124 12:46:25.978222 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x25fd\" (UniqueName: \"kubernetes.io/projected/6eff196d-2bdb-48c0-9c64-f8f0836f5450-kube-api-access-x25fd\") pod \"watcher-decision-engine-0\" (UID: \"6eff196d-2bdb-48c0-9c64-f8f0836f5450\") " pod="openstack/watcher-decision-engine-0" Nov 24 12:46:26 crc kubenswrapper[4756]: I1124 12:46:26.031338 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Nov 24 12:46:26 crc kubenswrapper[4756]: I1124 12:46:26.488487 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9988f7-fa01-4411-986d-ac6ba024a7a5" path="/var/lib/kubelet/pods/9e9988f7-fa01-4411-986d-ac6ba024a7a5/volumes" Nov 24 12:46:26 crc kubenswrapper[4756]: I1124 12:46:26.489864 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac680988-de91-4b39-ac09-3938cd5a2f91" path="/var/lib/kubelet/pods/ac680988-de91-4b39-ac09-3938cd5a2f91/volumes" Nov 24 12:46:26 crc kubenswrapper[4756]: I1124 12:46:26.527860 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Nov 24 12:46:26 crc kubenswrapper[4756]: I1124 12:46:26.574424 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"6eff196d-2bdb-48c0-9c64-f8f0836f5450","Type":"ContainerStarted","Data":"aeb1f4a3bf29a4ddf7ccb109dc15ba139975c23634ccf262dfbbe95742ac91f5"} Nov 24 12:46:27 crc kubenswrapper[4756]: I1124 12:46:27.591074 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"6eff196d-2bdb-48c0-9c64-f8f0836f5450","Type":"ContainerStarted","Data":"e7ae23f442b3fa71819a4fae252741dbd254184b3f63f68fe1494624e882ddad"} Nov 24 12:46:27 crc kubenswrapper[4756]: I1124 12:46:27.620575 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.620551214 podStartE2EDuration="2.620551214s" podCreationTimestamp="2025-11-24 12:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:46:27.617283316 +0000 UTC m=+1119.974797478" watchObservedRunningTime="2025-11-24 12:46:27.620551214 +0000 UTC m=+1119.978065356" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.364778 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mxm8r"] Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.366764 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.370136 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.372612 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cd4xw" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.375501 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.385979 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mxm8r"] Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.421429 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-scripts\") pod \"nova-cell0-conductor-db-sync-mxm8r\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.421536 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-config-data\") pod \"nova-cell0-conductor-db-sync-mxm8r\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.421651 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mxm8r\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.421679 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhs6t\" (UniqueName: \"kubernetes.io/projected/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-kube-api-access-fhs6t\") pod \"nova-cell0-conductor-db-sync-mxm8r\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.523267 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-config-data\") pod \"nova-cell0-conductor-db-sync-mxm8r\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.523447 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mxm8r\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:28 crc 
kubenswrapper[4756]: I1124 12:46:28.523480 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhs6t\" (UniqueName: \"kubernetes.io/projected/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-kube-api-access-fhs6t\") pod \"nova-cell0-conductor-db-sync-mxm8r\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.523556 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-scripts\") pod \"nova-cell0-conductor-db-sync-mxm8r\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.529668 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-config-data\") pod \"nova-cell0-conductor-db-sync-mxm8r\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.530458 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-scripts\") pod \"nova-cell0-conductor-db-sync-mxm8r\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.531193 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mxm8r\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.547864 
4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhs6t\" (UniqueName: \"kubernetes.io/projected/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-kube-api-access-fhs6t\") pod \"nova-cell0-conductor-db-sync-mxm8r\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:28 crc kubenswrapper[4756]: I1124 12:46:28.690916 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:29 crc kubenswrapper[4756]: I1124 12:46:29.056004 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mxm8r"] Nov 24 12:46:29 crc kubenswrapper[4756]: I1124 12:46:29.611810 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mxm8r" event={"ID":"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8","Type":"ContainerStarted","Data":"a8f185ca9998fccb9f7730616e1e41727539fdd5927355d9a6f6118a682dfed4"} Nov 24 12:46:35 crc kubenswrapper[4756]: I1124 12:46:35.540095 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 12:46:36 crc kubenswrapper[4756]: I1124 12:46:36.032669 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Nov 24 12:46:36 crc kubenswrapper[4756]: I1124 12:46:36.066098 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Nov 24 12:46:36 crc kubenswrapper[4756]: I1124 12:46:36.711218 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Nov 24 12:46:36 crc kubenswrapper[4756]: I1124 12:46:36.749354 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/watcher-decision-engine-0" Nov 24 12:46:38 crc kubenswrapper[4756]: I1124 12:46:38.731866 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mxm8r" event={"ID":"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8","Type":"ContainerStarted","Data":"f9cdafc88327978245679d7b1fb94759b02f09e765a19779a0c43261a7192ff8"} Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.780663 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.790299 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-mxm8r" podStartSLOduration=9.996588447 podStartE2EDuration="18.790275336s" podCreationTimestamp="2025-11-24 12:46:28 +0000 UTC" firstStartedPulling="2025-11-24 12:46:29.054558709 +0000 UTC m=+1121.412072851" lastFinishedPulling="2025-11-24 12:46:37.848245598 +0000 UTC m=+1130.205759740" observedRunningTime="2025-11-24 12:46:38.76042869 +0000 UTC m=+1131.117942822" watchObservedRunningTime="2025-11-24 12:46:46.790275336 +0000 UTC m=+1139.147789478" Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.800995 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.801314 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" containerName="glance-log" containerID="cri-o://7914416f46e4e4bdb8c96af316048967ac7de60df66418a5f589b54ddd148934" gracePeriod=30 Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.801359 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" containerName="glance-httpd" 
containerID="cri-o://52f01a7df47bbf6998c56d996cd1c23b85f428f4c41baa361084785acadd6d71" gracePeriod=30 Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.829599 4756 generic.go:334] "Generic (PLEG): container finished" podID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerID="b6be1a47c0fdbb649402219ec418d5f3b64be81852b44ae66611962303e36597" exitCode=137 Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.829681 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1435c5ee-2d2a-4df3-8ddf-997d77314458","Type":"ContainerDied","Data":"b6be1a47c0fdbb649402219ec418d5f3b64be81852b44ae66611962303e36597"} Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.829726 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.829743 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1435c5ee-2d2a-4df3-8ddf-997d77314458","Type":"ContainerDied","Data":"90d32d2fbb2a1bca37230da4d650e1159f5af173b1fde12d744be1879d66b5a3"} Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.829779 4756 scope.go:117] "RemoveContainer" containerID="b6be1a47c0fdbb649402219ec418d5f3b64be81852b44ae66611962303e36597" Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.863736 4756 scope.go:117] "RemoveContainer" containerID="320cfb75dceba55e8aa6726f6824879c666ff9d713f28266c0c091c11b85cdcf" Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.891267 4756 scope.go:117] "RemoveContainer" containerID="3ff38fc93a5d15fa4d300c96d315409e10d65fb7562e1ca40fa597082010fc1a" Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.944108 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-config-data\") pod \"1435c5ee-2d2a-4df3-8ddf-997d77314458\" (UID: 
\"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.944181 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsgcs\" (UniqueName: \"kubernetes.io/projected/1435c5ee-2d2a-4df3-8ddf-997d77314458-kube-api-access-nsgcs\") pod \"1435c5ee-2d2a-4df3-8ddf-997d77314458\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.944253 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-combined-ca-bundle\") pod \"1435c5ee-2d2a-4df3-8ddf-997d77314458\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.944311 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1435c5ee-2d2a-4df3-8ddf-997d77314458-run-httpd\") pod \"1435c5ee-2d2a-4df3-8ddf-997d77314458\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.944429 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1435c5ee-2d2a-4df3-8ddf-997d77314458-log-httpd\") pod \"1435c5ee-2d2a-4df3-8ddf-997d77314458\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.944491 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-scripts\") pod \"1435c5ee-2d2a-4df3-8ddf-997d77314458\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.944619 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-sg-core-conf-yaml\") pod \"1435c5ee-2d2a-4df3-8ddf-997d77314458\" (UID: \"1435c5ee-2d2a-4df3-8ddf-997d77314458\") " Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.944927 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1435c5ee-2d2a-4df3-8ddf-997d77314458-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1435c5ee-2d2a-4df3-8ddf-997d77314458" (UID: "1435c5ee-2d2a-4df3-8ddf-997d77314458"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.945095 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1435c5ee-2d2a-4df3-8ddf-997d77314458-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1435c5ee-2d2a-4df3-8ddf-997d77314458" (UID: "1435c5ee-2d2a-4df3-8ddf-997d77314458"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.945132 4756 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1435c5ee-2d2a-4df3-8ddf-997d77314458-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.953546 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-scripts" (OuterVolumeSpecName: "scripts") pod "1435c5ee-2d2a-4df3-8ddf-997d77314458" (UID: "1435c5ee-2d2a-4df3-8ddf-997d77314458"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.953554 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1435c5ee-2d2a-4df3-8ddf-997d77314458-kube-api-access-nsgcs" (OuterVolumeSpecName: "kube-api-access-nsgcs") pod "1435c5ee-2d2a-4df3-8ddf-997d77314458" (UID: "1435c5ee-2d2a-4df3-8ddf-997d77314458"). InnerVolumeSpecName "kube-api-access-nsgcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:46 crc kubenswrapper[4756]: I1124 12:46:46.986486 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1435c5ee-2d2a-4df3-8ddf-997d77314458" (UID: "1435c5ee-2d2a-4df3-8ddf-997d77314458"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.004494 4756 scope.go:117] "RemoveContainer" containerID="74077fec863537107e12fb89604cf75ab1089d987946258efb47b79ce3d5d0b6" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.028561 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1435c5ee-2d2a-4df3-8ddf-997d77314458" (UID: "1435c5ee-2d2a-4df3-8ddf-997d77314458"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.046860 4756 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.046906 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsgcs\" (UniqueName: \"kubernetes.io/projected/1435c5ee-2d2a-4df3-8ddf-997d77314458-kube-api-access-nsgcs\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.046924 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.046937 4756 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1435c5ee-2d2a-4df3-8ddf-997d77314458-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.046948 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.080342 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-config-data" (OuterVolumeSpecName: "config-data") pod "1435c5ee-2d2a-4df3-8ddf-997d77314458" (UID: "1435c5ee-2d2a-4df3-8ddf-997d77314458"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.111615 4756 scope.go:117] "RemoveContainer" containerID="b6be1a47c0fdbb649402219ec418d5f3b64be81852b44ae66611962303e36597" Nov 24 12:46:47 crc kubenswrapper[4756]: E1124 12:46:47.112191 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6be1a47c0fdbb649402219ec418d5f3b64be81852b44ae66611962303e36597\": container with ID starting with b6be1a47c0fdbb649402219ec418d5f3b64be81852b44ae66611962303e36597 not found: ID does not exist" containerID="b6be1a47c0fdbb649402219ec418d5f3b64be81852b44ae66611962303e36597" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.112248 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6be1a47c0fdbb649402219ec418d5f3b64be81852b44ae66611962303e36597"} err="failed to get container status \"b6be1a47c0fdbb649402219ec418d5f3b64be81852b44ae66611962303e36597\": rpc error: code = NotFound desc = could not find container \"b6be1a47c0fdbb649402219ec418d5f3b64be81852b44ae66611962303e36597\": container with ID starting with b6be1a47c0fdbb649402219ec418d5f3b64be81852b44ae66611962303e36597 not found: ID does not exist" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.112282 4756 scope.go:117] "RemoveContainer" containerID="320cfb75dceba55e8aa6726f6824879c666ff9d713f28266c0c091c11b85cdcf" Nov 24 12:46:47 crc kubenswrapper[4756]: E1124 12:46:47.112925 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320cfb75dceba55e8aa6726f6824879c666ff9d713f28266c0c091c11b85cdcf\": container with ID starting with 320cfb75dceba55e8aa6726f6824879c666ff9d713f28266c0c091c11b85cdcf not found: ID does not exist" containerID="320cfb75dceba55e8aa6726f6824879c666ff9d713f28266c0c091c11b85cdcf" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.112961 
4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320cfb75dceba55e8aa6726f6824879c666ff9d713f28266c0c091c11b85cdcf"} err="failed to get container status \"320cfb75dceba55e8aa6726f6824879c666ff9d713f28266c0c091c11b85cdcf\": rpc error: code = NotFound desc = could not find container \"320cfb75dceba55e8aa6726f6824879c666ff9d713f28266c0c091c11b85cdcf\": container with ID starting with 320cfb75dceba55e8aa6726f6824879c666ff9d713f28266c0c091c11b85cdcf not found: ID does not exist" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.113012 4756 scope.go:117] "RemoveContainer" containerID="3ff38fc93a5d15fa4d300c96d315409e10d65fb7562e1ca40fa597082010fc1a" Nov 24 12:46:47 crc kubenswrapper[4756]: E1124 12:46:47.113737 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ff38fc93a5d15fa4d300c96d315409e10d65fb7562e1ca40fa597082010fc1a\": container with ID starting with 3ff38fc93a5d15fa4d300c96d315409e10d65fb7562e1ca40fa597082010fc1a not found: ID does not exist" containerID="3ff38fc93a5d15fa4d300c96d315409e10d65fb7562e1ca40fa597082010fc1a" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.113760 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ff38fc93a5d15fa4d300c96d315409e10d65fb7562e1ca40fa597082010fc1a"} err="failed to get container status \"3ff38fc93a5d15fa4d300c96d315409e10d65fb7562e1ca40fa597082010fc1a\": rpc error: code = NotFound desc = could not find container \"3ff38fc93a5d15fa4d300c96d315409e10d65fb7562e1ca40fa597082010fc1a\": container with ID starting with 3ff38fc93a5d15fa4d300c96d315409e10d65fb7562e1ca40fa597082010fc1a not found: ID does not exist" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.113788 4756 scope.go:117] "RemoveContainer" containerID="74077fec863537107e12fb89604cf75ab1089d987946258efb47b79ce3d5d0b6" Nov 24 12:46:47 crc kubenswrapper[4756]: E1124 
12:46:47.114146 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74077fec863537107e12fb89604cf75ab1089d987946258efb47b79ce3d5d0b6\": container with ID starting with 74077fec863537107e12fb89604cf75ab1089d987946258efb47b79ce3d5d0b6 not found: ID does not exist" containerID="74077fec863537107e12fb89604cf75ab1089d987946258efb47b79ce3d5d0b6" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.114212 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74077fec863537107e12fb89604cf75ab1089d987946258efb47b79ce3d5d0b6"} err="failed to get container status \"74077fec863537107e12fb89604cf75ab1089d987946258efb47b79ce3d5d0b6\": rpc error: code = NotFound desc = could not find container \"74077fec863537107e12fb89604cf75ab1089d987946258efb47b79ce3d5d0b6\": container with ID starting with 74077fec863537107e12fb89604cf75ab1089d987946258efb47b79ce3d5d0b6 not found: ID does not exist" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.148247 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1435c5ee-2d2a-4df3-8ddf-997d77314458-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.168353 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.185439 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.194002 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:46:47 crc kubenswrapper[4756]: E1124 12:46:47.194577 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="ceilometer-central-agent" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.194605 4756 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="ceilometer-central-agent" Nov 24 12:46:47 crc kubenswrapper[4756]: E1124 12:46:47.194633 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="proxy-httpd" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.194642 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="proxy-httpd" Nov 24 12:46:47 crc kubenswrapper[4756]: E1124 12:46:47.194665 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="sg-core" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.194672 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="sg-core" Nov 24 12:46:47 crc kubenswrapper[4756]: E1124 12:46:47.194688 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="ceilometer-notification-agent" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.194696 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="ceilometer-notification-agent" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.194967 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="ceilometer-central-agent" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.195004 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="ceilometer-notification-agent" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.195016 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="proxy-httpd" Nov 24 12:46:47 crc kubenswrapper[4756]: 
I1124 12:46:47.195029 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" containerName="sg-core" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.197610 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.201782 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.202084 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.203776 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.351591 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7387748c-feab-4e70-96a1-a95254fb9ba9-log-httpd\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.351955 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq8jr\" (UniqueName: \"kubernetes.io/projected/7387748c-feab-4e70-96a1-a95254fb9ba9-kube-api-access-sq8jr\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.351987 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.352044 
4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7387748c-feab-4e70-96a1-a95254fb9ba9-run-httpd\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.352086 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-scripts\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.352109 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.352151 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-config-data\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.454684 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-config-data\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.454752 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7387748c-feab-4e70-96a1-a95254fb9ba9-log-httpd\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.454864 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq8jr\" (UniqueName: \"kubernetes.io/projected/7387748c-feab-4e70-96a1-a95254fb9ba9-kube-api-access-sq8jr\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.454917 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.454990 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7387748c-feab-4e70-96a1-a95254fb9ba9-run-httpd\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.455627 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-scripts\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.455663 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: 
I1124 12:46:47.456188 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7387748c-feab-4e70-96a1-a95254fb9ba9-run-httpd\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.456233 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7387748c-feab-4e70-96a1-a95254fb9ba9-log-httpd\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.460293 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-config-data\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.460485 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.460863 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.461017 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-scripts\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " 
pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.477196 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq8jr\" (UniqueName: \"kubernetes.io/projected/7387748c-feab-4e70-96a1-a95254fb9ba9-kube-api-access-sq8jr\") pod \"ceilometer-0\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.523774 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.845922 4756 generic.go:334] "Generic (PLEG): container finished" podID="da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" containerID="7914416f46e4e4bdb8c96af316048967ac7de60df66418a5f589b54ddd148934" exitCode=143 Nov 24 12:46:47 crc kubenswrapper[4756]: I1124 12:46:47.846334 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4","Type":"ContainerDied","Data":"7914416f46e4e4bdb8c96af316048967ac7de60df66418a5f589b54ddd148934"} Nov 24 12:46:48 crc kubenswrapper[4756]: I1124 12:46:48.016810 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:46:48 crc kubenswrapper[4756]: I1124 12:46:48.017050 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="52caf4d5-4b74-438c-81cf-6b084ba79352" containerName="glance-log" containerID="cri-o://3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1" gracePeriod=30 Nov 24 12:46:48 crc kubenswrapper[4756]: I1124 12:46:48.017205 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="52caf4d5-4b74-438c-81cf-6b084ba79352" containerName="glance-httpd" 
containerID="cri-o://98b5e437274c079fd823c37951926ea990e517ac03d50b325980caec0ffc4db7" gracePeriod=30 Nov 24 12:46:48 crc kubenswrapper[4756]: W1124 12:46:48.069052 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7387748c_feab_4e70_96a1_a95254fb9ba9.slice/crio-fa3afd0542174554d4b95a654951043f778a5f81d1a7f00c844d5028b6cf16fd WatchSource:0}: Error finding container fa3afd0542174554d4b95a654951043f778a5f81d1a7f00c844d5028b6cf16fd: Status 404 returned error can't find the container with id fa3afd0542174554d4b95a654951043f778a5f81d1a7f00c844d5028b6cf16fd Nov 24 12:46:48 crc kubenswrapper[4756]: I1124 12:46:48.070640 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:46:48 crc kubenswrapper[4756]: I1124 12:46:48.071704 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:46:48 crc kubenswrapper[4756]: E1124 12:46:48.168656 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52caf4d5_4b74_438c_81cf_6b084ba79352.slice/crio-3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1.scope\": RecentStats: unable to find data in memory cache]" Nov 24 12:46:48 crc kubenswrapper[4756]: I1124 12:46:48.490435 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1435c5ee-2d2a-4df3-8ddf-997d77314458" path="/var/lib/kubelet/pods/1435c5ee-2d2a-4df3-8ddf-997d77314458/volumes" Nov 24 12:46:48 crc kubenswrapper[4756]: I1124 12:46:48.863731 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7387748c-feab-4e70-96a1-a95254fb9ba9","Type":"ContainerStarted","Data":"fa3afd0542174554d4b95a654951043f778a5f81d1a7f00c844d5028b6cf16fd"} Nov 24 12:46:48 crc kubenswrapper[4756]: I1124 12:46:48.868271 4756 
generic.go:334] "Generic (PLEG): container finished" podID="52caf4d5-4b74-438c-81cf-6b084ba79352" containerID="3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1" exitCode=143 Nov 24 12:46:48 crc kubenswrapper[4756]: I1124 12:46:48.868315 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52caf4d5-4b74-438c-81cf-6b084ba79352","Type":"ContainerDied","Data":"3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1"} Nov 24 12:46:49 crc kubenswrapper[4756]: I1124 12:46:49.719567 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:46:49 crc kubenswrapper[4756]: I1124 12:46:49.882001 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7387748c-feab-4e70-96a1-a95254fb9ba9","Type":"ContainerStarted","Data":"bf96a3ab25dc6968caa7fc51353102af90c497900239b0b35e4f13d6e336fd63"} Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.426511 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.526499 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-scripts\") pod \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.526551 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.526636 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-httpd-run\") pod \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.526658 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-public-tls-certs\") pod \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.526710 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-config-data\") pod \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.526761 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lslpx\" (UniqueName: 
\"kubernetes.io/projected/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-kube-api-access-lslpx\") pod \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.526981 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-combined-ca-bundle\") pod \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.527019 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-logs\") pod \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\" (UID: \"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4\") " Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.528852 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-logs" (OuterVolumeSpecName: "logs") pod "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" (UID: "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.529119 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" (UID: "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.534359 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-scripts" (OuterVolumeSpecName: "scripts") pod "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" (UID: "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.534395 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-kube-api-access-lslpx" (OuterVolumeSpecName: "kube-api-access-lslpx") pod "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" (UID: "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4"). InnerVolumeSpecName "kube-api-access-lslpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.546069 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" (UID: "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.577877 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" (UID: "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.604319 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-config-data" (OuterVolumeSpecName: "config-data") pod "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" (UID: "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.630221 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lslpx\" (UniqueName: \"kubernetes.io/projected/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-kube-api-access-lslpx\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.630473 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.630549 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.630642 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.630740 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.630817 4756 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.630875 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.632099 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" (UID: "da0f06ac-a3f8-48b3-83a9-a3df94ece3b4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.652713 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.732463 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.732707 4756 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.897422 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7387748c-feab-4e70-96a1-a95254fb9ba9","Type":"ContainerStarted","Data":"d01c3eecd22a1ce44c3a7016215e0e8ef16d89424d8f1485da958164a93d0035"} Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.897479 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7387748c-feab-4e70-96a1-a95254fb9ba9","Type":"ContainerStarted","Data":"ae98f008f0de989f6e0438a0f7f836e8518af54d721e7bb1ebf274dc5af4524e"} Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.900790 4756 generic.go:334] "Generic (PLEG): container finished" podID="da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" containerID="52f01a7df47bbf6998c56d996cd1c23b85f428f4c41baa361084785acadd6d71" exitCode=0 Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.900821 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4","Type":"ContainerDied","Data":"52f01a7df47bbf6998c56d996cd1c23b85f428f4c41baa361084785acadd6d71"} Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.900841 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"da0f06ac-a3f8-48b3-83a9-a3df94ece3b4","Type":"ContainerDied","Data":"088d4c7b1f64cc74b39405e8e8df3992e2c0e9f2f9e810b4ab9615dc61b68b4b"} Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.900870 4756 scope.go:117] "RemoveContainer" containerID="52f01a7df47bbf6998c56d996cd1c23b85f428f4c41baa361084785acadd6d71" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.901035 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.948915 4756 scope.go:117] "RemoveContainer" containerID="7914416f46e4e4bdb8c96af316048967ac7de60df66418a5f589b54ddd148934" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.958571 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.969758 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.977546 4756 scope.go:117] "RemoveContainer" containerID="52f01a7df47bbf6998c56d996cd1c23b85f428f4c41baa361084785acadd6d71" Nov 24 12:46:50 crc kubenswrapper[4756]: E1124 12:46:50.978099 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52f01a7df47bbf6998c56d996cd1c23b85f428f4c41baa361084785acadd6d71\": container with ID starting with 52f01a7df47bbf6998c56d996cd1c23b85f428f4c41baa361084785acadd6d71 not found: ID does not exist" containerID="52f01a7df47bbf6998c56d996cd1c23b85f428f4c41baa361084785acadd6d71" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.978150 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52f01a7df47bbf6998c56d996cd1c23b85f428f4c41baa361084785acadd6d71"} err="failed to get container status \"52f01a7df47bbf6998c56d996cd1c23b85f428f4c41baa361084785acadd6d71\": rpc error: code = NotFound desc = could not find container \"52f01a7df47bbf6998c56d996cd1c23b85f428f4c41baa361084785acadd6d71\": container with ID starting with 52f01a7df47bbf6998c56d996cd1c23b85f428f4c41baa361084785acadd6d71 not found: ID does not exist" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.978193 4756 scope.go:117] "RemoveContainer" containerID="7914416f46e4e4bdb8c96af316048967ac7de60df66418a5f589b54ddd148934" Nov 
24 12:46:50 crc kubenswrapper[4756]: E1124 12:46:50.981061 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7914416f46e4e4bdb8c96af316048967ac7de60df66418a5f589b54ddd148934\": container with ID starting with 7914416f46e4e4bdb8c96af316048967ac7de60df66418a5f589b54ddd148934 not found: ID does not exist" containerID="7914416f46e4e4bdb8c96af316048967ac7de60df66418a5f589b54ddd148934" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.981104 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7914416f46e4e4bdb8c96af316048967ac7de60df66418a5f589b54ddd148934"} err="failed to get container status \"7914416f46e4e4bdb8c96af316048967ac7de60df66418a5f589b54ddd148934\": rpc error: code = NotFound desc = could not find container \"7914416f46e4e4bdb8c96af316048967ac7de60df66418a5f589b54ddd148934\": container with ID starting with 7914416f46e4e4bdb8c96af316048967ac7de60df66418a5f589b54ddd148934 not found: ID does not exist" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.985607 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:46:50 crc kubenswrapper[4756]: E1124 12:46:50.986060 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" containerName="glance-log" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.986077 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" containerName="glance-log" Nov 24 12:46:50 crc kubenswrapper[4756]: E1124 12:46:50.986101 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" containerName="glance-httpd" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.986107 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" containerName="glance-httpd" Nov 
24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.986305 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" containerName="glance-httpd" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.986333 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" containerName="glance-log" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.987439 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.990745 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.991322 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 12:46:50 crc kubenswrapper[4756]: I1124 12:46:50.995011 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.142006 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/505721db-c67e-42b6-b508-11cd950bc272-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.142311 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505721db-c67e-42b6-b508-11cd950bc272-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.142340 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tq96\" (UniqueName: \"kubernetes.io/projected/505721db-c67e-42b6-b508-11cd950bc272-kube-api-access-9tq96\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.142355 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/505721db-c67e-42b6-b508-11cd950bc272-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.142388 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.142432 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505721db-c67e-42b6-b508-11cd950bc272-config-data\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.142460 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505721db-c67e-42b6-b508-11cd950bc272-scripts\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 
12:46:51.142484 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505721db-c67e-42b6-b508-11cd950bc272-logs\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.244388 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505721db-c67e-42b6-b508-11cd950bc272-config-data\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.244467 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505721db-c67e-42b6-b508-11cd950bc272-scripts\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.244506 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505721db-c67e-42b6-b508-11cd950bc272-logs\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.244636 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/505721db-c67e-42b6-b508-11cd950bc272-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.244662 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505721db-c67e-42b6-b508-11cd950bc272-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.244699 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tq96\" (UniqueName: \"kubernetes.io/projected/505721db-c67e-42b6-b508-11cd950bc272-kube-api-access-9tq96\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.244726 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/505721db-c67e-42b6-b508-11cd950bc272-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.244773 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.245114 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505721db-c67e-42b6-b508-11cd950bc272-logs\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.245295 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.248892 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505721db-c67e-42b6-b508-11cd950bc272-config-data\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.249097 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/505721db-c67e-42b6-b508-11cd950bc272-scripts\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.249109 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505721db-c67e-42b6-b508-11cd950bc272-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.249373 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/505721db-c67e-42b6-b508-11cd950bc272-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.263111 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/505721db-c67e-42b6-b508-11cd950bc272-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.267105 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tq96\" (UniqueName: \"kubernetes.io/projected/505721db-c67e-42b6-b508-11cd950bc272-kube-api-access-9tq96\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.287014 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"505721db-c67e-42b6-b508-11cd950bc272\") " pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.310527 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.871132 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.932734 4756 generic.go:334] "Generic (PLEG): container finished" podID="52caf4d5-4b74-438c-81cf-6b084ba79352" containerID="98b5e437274c079fd823c37951926ea990e517ac03d50b325980caec0ffc4db7" exitCode=0 Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.932782 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52caf4d5-4b74-438c-81cf-6b084ba79352","Type":"ContainerDied","Data":"98b5e437274c079fd823c37951926ea990e517ac03d50b325980caec0ffc4db7"} Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.933006 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"52caf4d5-4b74-438c-81cf-6b084ba79352","Type":"ContainerDied","Data":"a608118f943333494da942af524833d7e86f3bd7e3a6fbef0866d9f658f31cca"} Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.933029 4756 scope.go:117] "RemoveContainer" containerID="98b5e437274c079fd823c37951926ea990e517ac03d50b325980caec0ffc4db7" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.932799 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.958205 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrpsr\" (UniqueName: \"kubernetes.io/projected/52caf4d5-4b74-438c-81cf-6b084ba79352-kube-api-access-wrpsr\") pod \"52caf4d5-4b74-438c-81cf-6b084ba79352\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.958284 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-combined-ca-bundle\") pod \"52caf4d5-4b74-438c-81cf-6b084ba79352\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.958310 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-scripts\") pod \"52caf4d5-4b74-438c-81cf-6b084ba79352\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.958346 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52caf4d5-4b74-438c-81cf-6b084ba79352-logs\") pod \"52caf4d5-4b74-438c-81cf-6b084ba79352\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.958392 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52caf4d5-4b74-438c-81cf-6b084ba79352-httpd-run\") pod \"52caf4d5-4b74-438c-81cf-6b084ba79352\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.958420 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"52caf4d5-4b74-438c-81cf-6b084ba79352\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.958557 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-internal-tls-certs\") pod \"52caf4d5-4b74-438c-81cf-6b084ba79352\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.958664 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-config-data\") pod \"52caf4d5-4b74-438c-81cf-6b084ba79352\" (UID: \"52caf4d5-4b74-438c-81cf-6b084ba79352\") " Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.961464 4756 scope.go:117] "RemoveContainer" containerID="3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.961600 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52caf4d5-4b74-438c-81cf-6b084ba79352-logs" (OuterVolumeSpecName: "logs") pod "52caf4d5-4b74-438c-81cf-6b084ba79352" (UID: "52caf4d5-4b74-438c-81cf-6b084ba79352"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.966147 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52caf4d5-4b74-438c-81cf-6b084ba79352-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "52caf4d5-4b74-438c-81cf-6b084ba79352" (UID: "52caf4d5-4b74-438c-81cf-6b084ba79352"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.969569 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "52caf4d5-4b74-438c-81cf-6b084ba79352" (UID: "52caf4d5-4b74-438c-81cf-6b084ba79352"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.974527 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-scripts" (OuterVolumeSpecName: "scripts") pod "52caf4d5-4b74-438c-81cf-6b084ba79352" (UID: "52caf4d5-4b74-438c-81cf-6b084ba79352"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:51 crc kubenswrapper[4756]: I1124 12:46:51.979136 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52caf4d5-4b74-438c-81cf-6b084ba79352-kube-api-access-wrpsr" (OuterVolumeSpecName: "kube-api-access-wrpsr") pod "52caf4d5-4b74-438c-81cf-6b084ba79352" (UID: "52caf4d5-4b74-438c-81cf-6b084ba79352"). InnerVolumeSpecName "kube-api-access-wrpsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.022285 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52caf4d5-4b74-438c-81cf-6b084ba79352" (UID: "52caf4d5-4b74-438c-81cf-6b084ba79352"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.065078 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrpsr\" (UniqueName: \"kubernetes.io/projected/52caf4d5-4b74-438c-81cf-6b084ba79352-kube-api-access-wrpsr\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.065134 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.065144 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.065181 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52caf4d5-4b74-438c-81cf-6b084ba79352-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.065194 4756 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52caf4d5-4b74-438c-81cf-6b084ba79352-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.065217 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.078264 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "52caf4d5-4b74-438c-81cf-6b084ba79352" (UID: "52caf4d5-4b74-438c-81cf-6b084ba79352"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.082114 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.097117 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-config-data" (OuterVolumeSpecName: "config-data") pod "52caf4d5-4b74-438c-81cf-6b084ba79352" (UID: "52caf4d5-4b74-438c-81cf-6b084ba79352"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.105106 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.163368 4756 scope.go:117] "RemoveContainer" containerID="98b5e437274c079fd823c37951926ea990e517ac03d50b325980caec0ffc4db7" Nov 24 12:46:52 crc kubenswrapper[4756]: E1124 12:46:52.163891 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b5e437274c079fd823c37951926ea990e517ac03d50b325980caec0ffc4db7\": container with ID starting with 98b5e437274c079fd823c37951926ea990e517ac03d50b325980caec0ffc4db7 not found: ID does not exist" containerID="98b5e437274c079fd823c37951926ea990e517ac03d50b325980caec0ffc4db7" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.163995 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b5e437274c079fd823c37951926ea990e517ac03d50b325980caec0ffc4db7"} err="failed to get container status \"98b5e437274c079fd823c37951926ea990e517ac03d50b325980caec0ffc4db7\": rpc error: code = NotFound desc = could not find container 
\"98b5e437274c079fd823c37951926ea990e517ac03d50b325980caec0ffc4db7\": container with ID starting with 98b5e437274c079fd823c37951926ea990e517ac03d50b325980caec0ffc4db7 not found: ID does not exist" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.164077 4756 scope.go:117] "RemoveContainer" containerID="3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1" Nov 24 12:46:52 crc kubenswrapper[4756]: E1124 12:46:52.164606 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1\": container with ID starting with 3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1 not found: ID does not exist" containerID="3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.164651 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1"} err="failed to get container status \"3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1\": rpc error: code = NotFound desc = could not find container \"3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1\": container with ID starting with 3090b73ea672a1cc3d511b844f0f4f7bb0b7905fc2dfc355f146b897c6783af1 not found: ID does not exist" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.167337 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.167387 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.167403 
4756 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52caf4d5-4b74-438c-81cf-6b084ba79352-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.268299 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.292555 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.356045 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:46:52 crc kubenswrapper[4756]: E1124 12:46:52.356560 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52caf4d5-4b74-438c-81cf-6b084ba79352" containerName="glance-httpd" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.356584 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="52caf4d5-4b74-438c-81cf-6b084ba79352" containerName="glance-httpd" Nov 24 12:46:52 crc kubenswrapper[4756]: E1124 12:46:52.356612 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52caf4d5-4b74-438c-81cf-6b084ba79352" containerName="glance-log" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.356621 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="52caf4d5-4b74-438c-81cf-6b084ba79352" containerName="glance-log" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.356867 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="52caf4d5-4b74-438c-81cf-6b084ba79352" containerName="glance-httpd" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.356896 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="52caf4d5-4b74-438c-81cf-6b084ba79352" containerName="glance-log" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.358308 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.361867 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.366364 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.390226 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.478344 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3add353c-985b-4ed2-9bcf-a64e03c5479a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.478404 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3add353c-985b-4ed2-9bcf-a64e03c5479a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.478444 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3add353c-985b-4ed2-9bcf-a64e03c5479a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.478483 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs87k\" 
(UniqueName: \"kubernetes.io/projected/3add353c-985b-4ed2-9bcf-a64e03c5479a-kube-api-access-zs87k\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.478517 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3add353c-985b-4ed2-9bcf-a64e03c5479a-logs\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.478542 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.478639 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3add353c-985b-4ed2-9bcf-a64e03c5479a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.478712 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3add353c-985b-4ed2-9bcf-a64e03c5479a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.492291 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52caf4d5-4b74-438c-81cf-6b084ba79352" 
path="/var/lib/kubelet/pods/52caf4d5-4b74-438c-81cf-6b084ba79352/volumes" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.493481 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da0f06ac-a3f8-48b3-83a9-a3df94ece3b4" path="/var/lib/kubelet/pods/da0f06ac-a3f8-48b3-83a9-a3df94ece3b4/volumes" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.580738 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs87k\" (UniqueName: \"kubernetes.io/projected/3add353c-985b-4ed2-9bcf-a64e03c5479a-kube-api-access-zs87k\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.580818 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3add353c-985b-4ed2-9bcf-a64e03c5479a-logs\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.580871 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.581023 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3add353c-985b-4ed2-9bcf-a64e03c5479a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.581121 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3add353c-985b-4ed2-9bcf-a64e03c5479a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.581195 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3add353c-985b-4ed2-9bcf-a64e03c5479a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.581236 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3add353c-985b-4ed2-9bcf-a64e03c5479a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.581194 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.581272 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3add353c-985b-4ed2-9bcf-a64e03c5479a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.581783 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/3add353c-985b-4ed2-9bcf-a64e03c5479a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.581899 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3add353c-985b-4ed2-9bcf-a64e03c5479a-logs\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.586541 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3add353c-985b-4ed2-9bcf-a64e03c5479a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.589753 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3add353c-985b-4ed2-9bcf-a64e03c5479a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.590424 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3add353c-985b-4ed2-9bcf-a64e03c5479a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.592703 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3add353c-985b-4ed2-9bcf-a64e03c5479a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.606490 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs87k\" (UniqueName: \"kubernetes.io/projected/3add353c-985b-4ed2-9bcf-a64e03c5479a-kube-api-access-zs87k\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.623712 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3add353c-985b-4ed2-9bcf-a64e03c5479a\") " pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.676834 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.980970 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7387748c-feab-4e70-96a1-a95254fb9ba9","Type":"ContainerStarted","Data":"9b924522330654d816f6f074251c1bbb1969110ad0285ec95dbf80350b24a51e"} Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.981476 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="ceilometer-central-agent" containerID="cri-o://bf96a3ab25dc6968caa7fc51353102af90c497900239b0b35e4f13d6e336fd63" gracePeriod=30 Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.981923 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.983075 4756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="proxy-httpd" containerID="cri-o://9b924522330654d816f6f074251c1bbb1969110ad0285ec95dbf80350b24a51e" gracePeriod=30 Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.983191 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="sg-core" containerID="cri-o://d01c3eecd22a1ce44c3a7016215e0e8ef16d89424d8f1485da958164a93d0035" gracePeriod=30 Nov 24 12:46:52 crc kubenswrapper[4756]: I1124 12:46:52.983248 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="ceilometer-notification-agent" containerID="cri-o://ae98f008f0de989f6e0438a0f7f836e8518af54d721e7bb1ebf274dc5af4524e" gracePeriod=30 Nov 24 12:46:53 crc kubenswrapper[4756]: I1124 12:46:53.013296 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"505721db-c67e-42b6-b508-11cd950bc272","Type":"ContainerStarted","Data":"6be6431ad467828813e5bddac2293d05319d8fd3c0b02f7837fa7999586cd720"} Nov 24 12:46:53 crc kubenswrapper[4756]: I1124 12:46:53.013356 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"505721db-c67e-42b6-b508-11cd950bc272","Type":"ContainerStarted","Data":"9566a7a74c62e685756c4cd0184510c183a85aba4b46eb7da1ebae100aef7cc9"} Nov 24 12:46:53 crc kubenswrapper[4756]: I1124 12:46:53.022027 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.373057746 podStartE2EDuration="6.022003109s" podCreationTimestamp="2025-11-24 12:46:47 +0000 UTC" firstStartedPulling="2025-11-24 12:46:48.071420409 +0000 UTC m=+1140.428934551" lastFinishedPulling="2025-11-24 12:46:51.720365772 +0000 UTC m=+1144.077879914" 
observedRunningTime="2025-11-24 12:46:53.005258426 +0000 UTC m=+1145.362772578" watchObservedRunningTime="2025-11-24 12:46:53.022003109 +0000 UTC m=+1145.379517251" Nov 24 12:46:53 crc kubenswrapper[4756]: I1124 12:46:53.297532 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 12:46:54 crc kubenswrapper[4756]: I1124 12:46:54.030001 4756 generic.go:334] "Generic (PLEG): container finished" podID="a77ddfde-c76b-4fea-b0a6-fcd470aa87a8" containerID="f9cdafc88327978245679d7b1fb94759b02f09e765a19779a0c43261a7192ff8" exitCode=0 Nov 24 12:46:54 crc kubenswrapper[4756]: I1124 12:46:54.030094 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mxm8r" event={"ID":"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8","Type":"ContainerDied","Data":"f9cdafc88327978245679d7b1fb94759b02f09e765a19779a0c43261a7192ff8"} Nov 24 12:46:54 crc kubenswrapper[4756]: I1124 12:46:54.040178 4756 generic.go:334] "Generic (PLEG): container finished" podID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerID="9b924522330654d816f6f074251c1bbb1969110ad0285ec95dbf80350b24a51e" exitCode=0 Nov 24 12:46:54 crc kubenswrapper[4756]: I1124 12:46:54.040219 4756 generic.go:334] "Generic (PLEG): container finished" podID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerID="d01c3eecd22a1ce44c3a7016215e0e8ef16d89424d8f1485da958164a93d0035" exitCode=2 Nov 24 12:46:54 crc kubenswrapper[4756]: I1124 12:46:54.040190 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7387748c-feab-4e70-96a1-a95254fb9ba9","Type":"ContainerDied","Data":"9b924522330654d816f6f074251c1bbb1969110ad0285ec95dbf80350b24a51e"} Nov 24 12:46:54 crc kubenswrapper[4756]: I1124 12:46:54.040280 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7387748c-feab-4e70-96a1-a95254fb9ba9","Type":"ContainerDied","Data":"d01c3eecd22a1ce44c3a7016215e0e8ef16d89424d8f1485da958164a93d0035"} Nov 24 12:46:54 crc kubenswrapper[4756]: I1124 12:46:54.040308 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7387748c-feab-4e70-96a1-a95254fb9ba9","Type":"ContainerDied","Data":"ae98f008f0de989f6e0438a0f7f836e8518af54d721e7bb1ebf274dc5af4524e"} Nov 24 12:46:54 crc kubenswrapper[4756]: I1124 12:46:54.040232 4756 generic.go:334] "Generic (PLEG): container finished" podID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerID="ae98f008f0de989f6e0438a0f7f836e8518af54d721e7bb1ebf274dc5af4524e" exitCode=0 Nov 24 12:46:54 crc kubenswrapper[4756]: I1124 12:46:54.044832 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3add353c-985b-4ed2-9bcf-a64e03c5479a","Type":"ContainerStarted","Data":"26ae7ae91ed10f461c4740050cd2f39db0e5800a40951953364aec694cd5b420"} Nov 24 12:46:54 crc kubenswrapper[4756]: I1124 12:46:54.044888 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3add353c-985b-4ed2-9bcf-a64e03c5479a","Type":"ContainerStarted","Data":"dbbf0a0463edd07722ea2a2981b5b26482003b88a11fc39a3ce782b4f32ac82f"} Nov 24 12:46:54 crc kubenswrapper[4756]: I1124 12:46:54.047996 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"505721db-c67e-42b6-b508-11cd950bc272","Type":"ContainerStarted","Data":"94caaa7bc218652c98582ae617c4046763a3cb5f11522392725b0e63cbbf886d"} Nov 24 12:46:54 crc kubenswrapper[4756]: I1124 12:46:54.080722 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.08069959 podStartE2EDuration="4.08069959s" podCreationTimestamp="2025-11-24 12:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:46:54.073506236 +0000 UTC m=+1146.431020398" watchObservedRunningTime="2025-11-24 12:46:54.08069959 +0000 UTC m=+1146.438213732" Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.062259 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3add353c-985b-4ed2-9bcf-a64e03c5479a","Type":"ContainerStarted","Data":"5a16a674da81c6364720aaf4187678ec88342a3a48d3b0b055acc12f05872592"} Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.085792 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.085770853 podStartE2EDuration="3.085770853s" podCreationTimestamp="2025-11-24 12:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:46:55.083763238 +0000 UTC m=+1147.441277400" watchObservedRunningTime="2025-11-24 12:46:55.085770853 +0000 UTC m=+1147.443284995" Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.466961 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.597439 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-combined-ca-bundle\") pod \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.597568 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-config-data\") pod \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.597787 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhs6t\" (UniqueName: \"kubernetes.io/projected/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-kube-api-access-fhs6t\") pod \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.597873 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-scripts\") pod \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\" (UID: \"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8\") " Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.603864 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-scripts" (OuterVolumeSpecName: "scripts") pod "a77ddfde-c76b-4fea-b0a6-fcd470aa87a8" (UID: "a77ddfde-c76b-4fea-b0a6-fcd470aa87a8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.603874 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-kube-api-access-fhs6t" (OuterVolumeSpecName: "kube-api-access-fhs6t") pod "a77ddfde-c76b-4fea-b0a6-fcd470aa87a8" (UID: "a77ddfde-c76b-4fea-b0a6-fcd470aa87a8"). InnerVolumeSpecName "kube-api-access-fhs6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.627992 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a77ddfde-c76b-4fea-b0a6-fcd470aa87a8" (UID: "a77ddfde-c76b-4fea-b0a6-fcd470aa87a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.639009 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-config-data" (OuterVolumeSpecName: "config-data") pod "a77ddfde-c76b-4fea-b0a6-fcd470aa87a8" (UID: "a77ddfde-c76b-4fea-b0a6-fcd470aa87a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.701418 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.701496 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhs6t\" (UniqueName: \"kubernetes.io/projected/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-kube-api-access-fhs6t\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.701521 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:55 crc kubenswrapper[4756]: I1124 12:46:55.701539 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.086977 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mxm8r" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.086968 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mxm8r" event={"ID":"a77ddfde-c76b-4fea-b0a6-fcd470aa87a8","Type":"ContainerDied","Data":"a8f185ca9998fccb9f7730616e1e41727539fdd5927355d9a6f6118a682dfed4"} Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.087042 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8f185ca9998fccb9f7730616e1e41727539fdd5927355d9a6f6118a682dfed4" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.180203 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 12:46:56 crc kubenswrapper[4756]: E1124 12:46:56.180797 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77ddfde-c76b-4fea-b0a6-fcd470aa87a8" containerName="nova-cell0-conductor-db-sync" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.180819 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77ddfde-c76b-4fea-b0a6-fcd470aa87a8" containerName="nova-cell0-conductor-db-sync" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.181057 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a77ddfde-c76b-4fea-b0a6-fcd470aa87a8" containerName="nova-cell0-conductor-db-sync" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.182800 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.186294 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cd4xw" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.186512 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.203807 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.312021 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.312203 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.312311 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj5fp\" (UniqueName: \"kubernetes.io/projected/c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3-kube-api-access-lj5fp\") pod \"nova-cell0-conductor-0\" (UID: \"c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.414077 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.414199 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj5fp\" (UniqueName: \"kubernetes.io/projected/c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3-kube-api-access-lj5fp\") pod \"nova-cell0-conductor-0\" (UID: \"c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.414238 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.419708 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.427935 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.435037 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj5fp\" (UniqueName: \"kubernetes.io/projected/c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3-kube-api-access-lj5fp\") pod \"nova-cell0-conductor-0\" (UID: 
\"c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3\") " pod="openstack/nova-cell0-conductor-0" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.510843 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 12:46:56 crc kubenswrapper[4756]: I1124 12:46:56.994697 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 12:46:57 crc kubenswrapper[4756]: I1124 12:46:57.120710 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3","Type":"ContainerStarted","Data":"e7677c7378ebfc82010d2c34a1c42323af9da91d4950423686bd6bd6c22f9208"} Nov 24 12:46:58 crc kubenswrapper[4756]: I1124 12:46:58.134112 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3","Type":"ContainerStarted","Data":"6c375e8b013dbcd44af539bb989b275cab64264812f7512bb41b533d8ed3f960"} Nov 24 12:46:58 crc kubenswrapper[4756]: I1124 12:46:58.135506 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 24 12:46:58 crc kubenswrapper[4756]: I1124 12:46:58.154117 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.154097194 podStartE2EDuration="2.154097194s" podCreationTimestamp="2025-11-24 12:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:46:58.151512914 +0000 UTC m=+1150.509027056" watchObservedRunningTime="2025-11-24 12:46:58.154097194 +0000 UTC m=+1150.511611336" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.147536 4756 generic.go:334] "Generic (PLEG): container finished" podID="7387748c-feab-4e70-96a1-a95254fb9ba9" 
containerID="bf96a3ab25dc6968caa7fc51353102af90c497900239b0b35e4f13d6e336fd63" exitCode=0 Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.147610 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7387748c-feab-4e70-96a1-a95254fb9ba9","Type":"ContainerDied","Data":"bf96a3ab25dc6968caa7fc51353102af90c497900239b0b35e4f13d6e336fd63"} Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.379546 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.485889 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq8jr\" (UniqueName: \"kubernetes.io/projected/7387748c-feab-4e70-96a1-a95254fb9ba9-kube-api-access-sq8jr\") pod \"7387748c-feab-4e70-96a1-a95254fb9ba9\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.485980 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-scripts\") pod \"7387748c-feab-4e70-96a1-a95254fb9ba9\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.486028 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-config-data\") pod \"7387748c-feab-4e70-96a1-a95254fb9ba9\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.486095 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7387748c-feab-4e70-96a1-a95254fb9ba9-log-httpd\") pod \"7387748c-feab-4e70-96a1-a95254fb9ba9\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " Nov 24 12:46:59 crc kubenswrapper[4756]: 
I1124 12:46:59.486176 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-combined-ca-bundle\") pod \"7387748c-feab-4e70-96a1-a95254fb9ba9\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.486300 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-sg-core-conf-yaml\") pod \"7387748c-feab-4e70-96a1-a95254fb9ba9\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.486344 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7387748c-feab-4e70-96a1-a95254fb9ba9-run-httpd\") pod \"7387748c-feab-4e70-96a1-a95254fb9ba9\" (UID: \"7387748c-feab-4e70-96a1-a95254fb9ba9\") " Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.487231 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7387748c-feab-4e70-96a1-a95254fb9ba9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7387748c-feab-4e70-96a1-a95254fb9ba9" (UID: "7387748c-feab-4e70-96a1-a95254fb9ba9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.487525 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7387748c-feab-4e70-96a1-a95254fb9ba9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7387748c-feab-4e70-96a1-a95254fb9ba9" (UID: "7387748c-feab-4e70-96a1-a95254fb9ba9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.497455 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-scripts" (OuterVolumeSpecName: "scripts") pod "7387748c-feab-4e70-96a1-a95254fb9ba9" (UID: "7387748c-feab-4e70-96a1-a95254fb9ba9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.497608 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7387748c-feab-4e70-96a1-a95254fb9ba9-kube-api-access-sq8jr" (OuterVolumeSpecName: "kube-api-access-sq8jr") pod "7387748c-feab-4e70-96a1-a95254fb9ba9" (UID: "7387748c-feab-4e70-96a1-a95254fb9ba9"). InnerVolumeSpecName "kube-api-access-sq8jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.520880 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7387748c-feab-4e70-96a1-a95254fb9ba9" (UID: "7387748c-feab-4e70-96a1-a95254fb9ba9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.581739 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7387748c-feab-4e70-96a1-a95254fb9ba9" (UID: "7387748c-feab-4e70-96a1-a95254fb9ba9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.589545 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq8jr\" (UniqueName: \"kubernetes.io/projected/7387748c-feab-4e70-96a1-a95254fb9ba9-kube-api-access-sq8jr\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.589583 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.589595 4756 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7387748c-feab-4e70-96a1-a95254fb9ba9-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.589607 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.589619 4756 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.589631 4756 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7387748c-feab-4e70-96a1-a95254fb9ba9-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.604497 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-config-data" (OuterVolumeSpecName: "config-data") pod "7387748c-feab-4e70-96a1-a95254fb9ba9" (UID: "7387748c-feab-4e70-96a1-a95254fb9ba9"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:46:59 crc kubenswrapper[4756]: I1124 12:46:59.691222 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7387748c-feab-4e70-96a1-a95254fb9ba9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.161725 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7387748c-feab-4e70-96a1-a95254fb9ba9","Type":"ContainerDied","Data":"fa3afd0542174554d4b95a654951043f778a5f81d1a7f00c844d5028b6cf16fd"} Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.161782 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.161827 4756 scope.go:117] "RemoveContainer" containerID="9b924522330654d816f6f074251c1bbb1969110ad0285ec95dbf80350b24a51e" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.188674 4756 scope.go:117] "RemoveContainer" containerID="d01c3eecd22a1ce44c3a7016215e0e8ef16d89424d8f1485da958164a93d0035" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.222598 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.238223 4756 scope.go:117] "RemoveContainer" containerID="ae98f008f0de989f6e0438a0f7f836e8518af54d721e7bb1ebf274dc5af4524e" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.268949 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.281230 4756 scope.go:117] "RemoveContainer" containerID="bf96a3ab25dc6968caa7fc51353102af90c497900239b0b35e4f13d6e336fd63" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.282997 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:00 crc 
kubenswrapper[4756]: E1124 12:47:00.283631 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="proxy-httpd" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.283649 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="proxy-httpd" Nov 24 12:47:00 crc kubenswrapper[4756]: E1124 12:47:00.283663 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="ceilometer-central-agent" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.283669 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="ceilometer-central-agent" Nov 24 12:47:00 crc kubenswrapper[4756]: E1124 12:47:00.283692 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="ceilometer-notification-agent" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.283699 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="ceilometer-notification-agent" Nov 24 12:47:00 crc kubenswrapper[4756]: E1124 12:47:00.283730 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="sg-core" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.283738 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="sg-core" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.283944 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="ceilometer-notification-agent" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.283960 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" 
containerName="ceilometer-central-agent" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.283977 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="proxy-httpd" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.283986 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" containerName="sg-core" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.286528 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.290693 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.291307 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.292424 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.409538 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9590efb-66f1-496f-884f-9685c2a3af1b-log-httpd\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.410121 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9snf4\" (UniqueName: \"kubernetes.io/projected/f9590efb-66f1-496f-884f-9685c2a3af1b-kube-api-access-9snf4\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.410324 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9590efb-66f1-496f-884f-9685c2a3af1b-run-httpd\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.410519 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.410635 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-config-data\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.410746 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-scripts\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.410838 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.488854 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7387748c-feab-4e70-96a1-a95254fb9ba9" path="/var/lib/kubelet/pods/7387748c-feab-4e70-96a1-a95254fb9ba9/volumes" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 
12:47:00.512525 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9590efb-66f1-496f-884f-9685c2a3af1b-log-httpd\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.512594 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9snf4\" (UniqueName: \"kubernetes.io/projected/f9590efb-66f1-496f-884f-9685c2a3af1b-kube-api-access-9snf4\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.513331 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9590efb-66f1-496f-884f-9685c2a3af1b-log-httpd\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.513414 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9590efb-66f1-496f-884f-9685c2a3af1b-run-httpd\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.513779 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.513878 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-config-data\") pod \"ceilometer-0\" (UID: 
\"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.513922 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9590efb-66f1-496f-884f-9685c2a3af1b-run-httpd\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.514019 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-scripts\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.514107 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.518493 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.519052 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.531077 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-scripts\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.531364 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-config-data\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.534466 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9snf4\" (UniqueName: \"kubernetes.io/projected/f9590efb-66f1-496f-884f-9685c2a3af1b-kube-api-access-9snf4\") pod \"ceilometer-0\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " pod="openstack/ceilometer-0" Nov 24 12:47:00 crc kubenswrapper[4756]: I1124 12:47:00.614076 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:47:01 crc kubenswrapper[4756]: I1124 12:47:01.113756 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:01 crc kubenswrapper[4756]: I1124 12:47:01.173097 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9590efb-66f1-496f-884f-9685c2a3af1b","Type":"ContainerStarted","Data":"6c4b511064e8f3ccb3518e83e20d35801bc9fb2aadb5d6e1264ba98d8e05d302"} Nov 24 12:47:01 crc kubenswrapper[4756]: I1124 12:47:01.311479 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 12:47:01 crc kubenswrapper[4756]: I1124 12:47:01.311525 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 12:47:01 crc kubenswrapper[4756]: I1124 12:47:01.352644 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Nov 24 12:47:01 crc kubenswrapper[4756]: I1124 12:47:01.384085 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 12:47:02 crc kubenswrapper[4756]: I1124 12:47:02.193579 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9590efb-66f1-496f-884f-9685c2a3af1b","Type":"ContainerStarted","Data":"f8c8e856370852cbe3bcdeddbad325dd97eae2470c5e700766ffdba0d5884910"} Nov 24 12:47:02 crc kubenswrapper[4756]: I1124 12:47:02.194149 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 12:47:02 crc kubenswrapper[4756]: I1124 12:47:02.194186 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 12:47:02 crc kubenswrapper[4756]: I1124 12:47:02.677282 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 12:47:02 crc kubenswrapper[4756]: I1124 12:47:02.677864 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 12:47:02 crc kubenswrapper[4756]: I1124 12:47:02.712579 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 12:47:02 crc kubenswrapper[4756]: I1124 12:47:02.726772 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 12:47:03 crc kubenswrapper[4756]: I1124 12:47:03.204727 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9590efb-66f1-496f-884f-9685c2a3af1b","Type":"ContainerStarted","Data":"60842c9eb7b7a459f459ec680bf5a1437e46737299eb1bd8eca40fee6ad58a34"} Nov 24 12:47:03 crc kubenswrapper[4756]: I1124 12:47:03.205563 4756 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 12:47:03 crc kubenswrapper[4756]: I1124 12:47:03.205591 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 12:47:03 crc kubenswrapper[4756]: I1124 12:47:03.480871 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:47:03 crc kubenswrapper[4756]: I1124 12:47:03.480962 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:47:04 crc kubenswrapper[4756]: I1124 12:47:04.219080 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9590efb-66f1-496f-884f-9685c2a3af1b","Type":"ContainerStarted","Data":"285edb0c0f3164b9bd173f71088a8ef8ffbfbc1cfd45e563f5a1b3d38070b943"} Nov 24 12:47:04 crc kubenswrapper[4756]: I1124 12:47:04.219527 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:47:04 crc kubenswrapper[4756]: I1124 12:47:04.219542 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:47:04 crc kubenswrapper[4756]: I1124 12:47:04.433054 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 12:47:04 crc kubenswrapper[4756]: I1124 12:47:04.440613 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 12:47:05 
crc kubenswrapper[4756]: I1124 12:47:05.232944 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9590efb-66f1-496f-884f-9685c2a3af1b","Type":"ContainerStarted","Data":"5b3a30670a50b95221ed78b73410991adde01d1a87f6be34599c3653d1eaea0a"} Nov 24 12:47:05 crc kubenswrapper[4756]: I1124 12:47:05.262464 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8905358319999999 podStartE2EDuration="5.262445088s" podCreationTimestamp="2025-11-24 12:47:00 +0000 UTC" firstStartedPulling="2025-11-24 12:47:01.11934881 +0000 UTC m=+1153.476862952" lastFinishedPulling="2025-11-24 12:47:04.491258066 +0000 UTC m=+1156.848772208" observedRunningTime="2025-11-24 12:47:05.259776026 +0000 UTC m=+1157.617290188" watchObservedRunningTime="2025-11-24 12:47:05.262445088 +0000 UTC m=+1157.619959240" Nov 24 12:47:05 crc kubenswrapper[4756]: I1124 12:47:05.623173 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 12:47:05 crc kubenswrapper[4756]: I1124 12:47:05.623676 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 12:47:05 crc kubenswrapper[4756]: I1124 12:47:05.649755 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 12:47:06 crc kubenswrapper[4756]: I1124 12:47:06.244427 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:47:06 crc kubenswrapper[4756]: I1124 12:47:06.549323 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.196020 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4vkkv"] Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.205033 4756 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.209848 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.210114 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.247239 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4vkkv"] Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.293449 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4vkkv\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.293550 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-config-data\") pod \"nova-cell0-cell-mapping-4vkkv\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.293702 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw589\" (UniqueName: \"kubernetes.io/projected/e04cac76-134f-4232-aaad-c16ec2ef43dc-kube-api-access-vw589\") pod \"nova-cell0-cell-mapping-4vkkv\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.293824 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-scripts\") pod \"nova-cell0-cell-mapping-4vkkv\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.396961 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4vkkv\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.397053 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-config-data\") pod \"nova-cell0-cell-mapping-4vkkv\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.397187 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw589\" (UniqueName: \"kubernetes.io/projected/e04cac76-134f-4232-aaad-c16ec2ef43dc-kube-api-access-vw589\") pod \"nova-cell0-cell-mapping-4vkkv\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.397275 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-scripts\") pod \"nova-cell0-cell-mapping-4vkkv\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.417231 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 
12:47:07.425127 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.431708 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.432914 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4vkkv\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.441977 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-config-data\") pod \"nova-cell0-cell-mapping-4vkkv\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.442983 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-scripts\") pod \"nova-cell0-cell-mapping-4vkkv\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.453875 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.483449 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.491343 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.494223 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.497021 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw589\" (UniqueName: \"kubernetes.io/projected/e04cac76-134f-4232-aaad-c16ec2ef43dc-kube-api-access-vw589\") pod \"nova-cell0-cell-mapping-4vkkv\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.505535 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.505605 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-logs\") pod \"nova-metadata-0\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.505670 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nkfg\" (UniqueName: \"kubernetes.io/projected/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-kube-api-access-4nkfg\") pod \"nova-metadata-0\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.505703 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-config-data\") pod \"nova-metadata-0\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.546173 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.560191 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.579601 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.581329 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.589777 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.607510 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3f9be2-b906-40ba-85bb-dfb3151eb864-config-data\") pod \"nova-api-0\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.607580 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.607604 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-logs\") pod 
\"nova-metadata-0\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.607681 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3f9be2-b906-40ba-85bb-dfb3151eb864-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.607701 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktmvj\" (UniqueName: \"kubernetes.io/projected/dc3f9be2-b906-40ba-85bb-dfb3151eb864-kube-api-access-ktmvj\") pod \"nova-api-0\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.607720 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nkfg\" (UniqueName: \"kubernetes.io/projected/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-kube-api-access-4nkfg\") pod \"nova-metadata-0\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.607754 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-config-data\") pod \"nova-metadata-0\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.607783 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3f9be2-b906-40ba-85bb-dfb3151eb864-logs\") pod \"nova-api-0\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 
12:47:07.616045 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-logs\") pod \"nova-metadata-0\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.625263 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.631824 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-config-data\") pod \"nova-metadata-0\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.646019 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.661027 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nkfg\" (UniqueName: \"kubernetes.io/projected/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-kube-api-access-4nkfg\") pod \"nova-metadata-0\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.707634 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-lfz22"] Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.710802 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.713274 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f847c6f9-2c3a-4846-bc94-09a7685f3387-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f847c6f9-2c3a-4846-bc94-09a7685f3387\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.713374 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3f9be2-b906-40ba-85bb-dfb3151eb864-config-data\") pod \"nova-api-0\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.713467 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3f9be2-b906-40ba-85bb-dfb3151eb864-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.713488 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktmvj\" (UniqueName: \"kubernetes.io/projected/dc3f9be2-b906-40ba-85bb-dfb3151eb864-kube-api-access-ktmvj\") pod \"nova-api-0\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.713508 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f847c6f9-2c3a-4846-bc94-09a7685f3387-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f847c6f9-2c3a-4846-bc94-09a7685f3387\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.713562 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3f9be2-b906-40ba-85bb-dfb3151eb864-logs\") pod \"nova-api-0\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.713619 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgb78\" (UniqueName: \"kubernetes.io/projected/f847c6f9-2c3a-4846-bc94-09a7685f3387-kube-api-access-wgb78\") pod \"nova-cell1-novncproxy-0\" (UID: \"f847c6f9-2c3a-4846-bc94-09a7685f3387\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.715848 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3f9be2-b906-40ba-85bb-dfb3151eb864-logs\") pod \"nova-api-0\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.718520 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.743629 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3f9be2-b906-40ba-85bb-dfb3151eb864-config-data\") pod \"nova-api-0\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.754811 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3f9be2-b906-40ba-85bb-dfb3151eb864-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.755420 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktmvj\" (UniqueName: \"kubernetes.io/projected/dc3f9be2-b906-40ba-85bb-dfb3151eb864-kube-api-access-ktmvj\") pod \"nova-api-0\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.772045 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.816790 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.816860 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-config\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.816918 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f847c6f9-2c3a-4846-bc94-09a7685f3387-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f847c6f9-2c3a-4846-bc94-09a7685f3387\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.816956 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.816977 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " 
pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.817028 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqxs5\" (UniqueName: \"kubernetes.io/projected/23a10485-152b-4bf5-bb3d-49fe345f390e-kube-api-access-vqxs5\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.817076 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgb78\" (UniqueName: \"kubernetes.io/projected/f847c6f9-2c3a-4846-bc94-09a7685f3387-kube-api-access-wgb78\") pod \"nova-cell1-novncproxy-0\" (UID: \"f847c6f9-2c3a-4846-bc94-09a7685f3387\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.817117 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f847c6f9-2c3a-4846-bc94-09a7685f3387-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f847c6f9-2c3a-4846-bc94-09a7685f3387\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.817142 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.823477 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f847c6f9-2c3a-4846-bc94-09a7685f3387-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f847c6f9-2c3a-4846-bc94-09a7685f3387\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.824254 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-lfz22"] Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.835253 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f847c6f9-2c3a-4846-bc94-09a7685f3387-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f847c6f9-2c3a-4846-bc94-09a7685f3387\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.868938 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgb78\" (UniqueName: \"kubernetes.io/projected/f847c6f9-2c3a-4846-bc94-09a7685f3387-kube-api-access-wgb78\") pod \"nova-cell1-novncproxy-0\" (UID: \"f847c6f9-2c3a-4846-bc94-09a7685f3387\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.879783 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.881650 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.886993 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.918526 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.918628 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.918657 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-config\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.918731 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.918750 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.918786 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqxs5\" (UniqueName: \"kubernetes.io/projected/23a10485-152b-4bf5-bb3d-49fe345f390e-kube-api-access-vqxs5\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.920098 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.920919 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.921432 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-config\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.922561 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-dns-swift-storage-0\") pod 
\"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.923028 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.941851 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:47:07 crc kubenswrapper[4756]: I1124 12:47:07.989077 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqxs5\" (UniqueName: \"kubernetes.io/projected/23a10485-152b-4bf5-bb3d-49fe345f390e-kube-api-access-vqxs5\") pod \"dnsmasq-dns-bccf8f775-lfz22\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.024275 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913f11ff-7204-4059-a891-1d40b66f2b82-config-data\") pod \"nova-scheduler-0\" (UID: \"913f11ff-7204-4059-a891-1d40b66f2b82\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.024343 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913f11ff-7204-4059-a891-1d40b66f2b82-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"913f11ff-7204-4059-a891-1d40b66f2b82\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.024425 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ppfgp\" (UniqueName: \"kubernetes.io/projected/913f11ff-7204-4059-a891-1d40b66f2b82-kube-api-access-ppfgp\") pod \"nova-scheduler-0\" (UID: \"913f11ff-7204-4059-a891-1d40b66f2b82\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.096968 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.115908 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.127392 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppfgp\" (UniqueName: \"kubernetes.io/projected/913f11ff-7204-4059-a891-1d40b66f2b82-kube-api-access-ppfgp\") pod \"nova-scheduler-0\" (UID: \"913f11ff-7204-4059-a891-1d40b66f2b82\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.127510 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913f11ff-7204-4059-a891-1d40b66f2b82-config-data\") pod \"nova-scheduler-0\" (UID: \"913f11ff-7204-4059-a891-1d40b66f2b82\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.127547 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913f11ff-7204-4059-a891-1d40b66f2b82-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"913f11ff-7204-4059-a891-1d40b66f2b82\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.151012 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913f11ff-7204-4059-a891-1d40b66f2b82-config-data\") pod \"nova-scheduler-0\" (UID: 
\"913f11ff-7204-4059-a891-1d40b66f2b82\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.168498 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913f11ff-7204-4059-a891-1d40b66f2b82-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"913f11ff-7204-4059-a891-1d40b66f2b82\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.184471 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppfgp\" (UniqueName: \"kubernetes.io/projected/913f11ff-7204-4059-a891-1d40b66f2b82-kube-api-access-ppfgp\") pod \"nova-scheduler-0\" (UID: \"913f11ff-7204-4059-a891-1d40b66f2b82\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.290496 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.562086 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.868998 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4vkkv"] Nov 24 12:47:08 crc kubenswrapper[4756]: W1124 12:47:08.923808 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc3f9be2_b906_40ba_85bb_dfb3151eb864.slice/crio-9b944fe19b7f17bbfd8aaf9ad9e3eebcece6e5c0b173c466d337fc52ec04b779 WatchSource:0}: Error finding container 9b944fe19b7f17bbfd8aaf9ad9e3eebcece6e5c0b173c466d337fc52ec04b779: Status 404 returned error can't find the container with id 9b944fe19b7f17bbfd8aaf9ad9e3eebcece6e5c0b173c466d337fc52ec04b779 Nov 24 12:47:08 crc kubenswrapper[4756]: I1124 12:47:08.942814 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 
12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.038322 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-877px"] Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.039934 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.046229 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.046481 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.060088 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-scripts\") pod \"nova-cell1-conductor-db-sync-877px\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.066999 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-877px\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.067060 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-config-data\") pod \"nova-cell1-conductor-db-sync-877px\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 
12:47:09.067603 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp2fs\" (UniqueName: \"kubernetes.io/projected/07f3e378-99a3-4ef7-b4a2-15efaa919862-kube-api-access-pp2fs\") pod \"nova-cell1-conductor-db-sync-877px\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.079256 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-877px"] Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.096265 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.107019 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.170299 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-877px\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.170340 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-config-data\") pod \"nova-cell1-conductor-db-sync-877px\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.171671 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp2fs\" (UniqueName: \"kubernetes.io/projected/07f3e378-99a3-4ef7-b4a2-15efaa919862-kube-api-access-pp2fs\") pod \"nova-cell1-conductor-db-sync-877px\" (UID: 
\"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.171823 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-scripts\") pod \"nova-cell1-conductor-db-sync-877px\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.177304 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-877px\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.181457 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-config-data\") pod \"nova-cell1-conductor-db-sync-877px\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.189810 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-scripts\") pod \"nova-cell1-conductor-db-sync-877px\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.190960 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp2fs\" (UniqueName: \"kubernetes.io/projected/07f3e378-99a3-4ef7-b4a2-15efaa919862-kube-api-access-pp2fs\") pod \"nova-cell1-conductor-db-sync-877px\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " 
pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.330667 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4vkkv" event={"ID":"e04cac76-134f-4232-aaad-c16ec2ef43dc","Type":"ContainerStarted","Data":"deee3e16877530cc76e268449cf02a0a279b71c5b59a457b14aea9109c02f3a9"} Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.335714 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc3f9be2-b906-40ba-85bb-dfb3151eb864","Type":"ContainerStarted","Data":"9b944fe19b7f17bbfd8aaf9ad9e3eebcece6e5c0b173c466d337fc52ec04b779"} Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.338645 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"913f11ff-7204-4059-a891-1d40b66f2b82","Type":"ContainerStarted","Data":"642b39d3179ea0efd6a555f9ff78cbd865874ad1d2aae43ff3da741aa9b0f2d6"} Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.341962 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f847c6f9-2c3a-4846-bc94-09a7685f3387","Type":"ContainerStarted","Data":"672bcf61ba425a7a5579b7eacba3467dc08278568fd4fef4e07d7fea55104b0a"} Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.344626 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5","Type":"ContainerStarted","Data":"6b27c4e6f58bc2333eb145c9b319bc821d9217d737ae4f0ada62a9d57056ae72"} Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.435294 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:09 crc kubenswrapper[4756]: I1124 12:47:09.668794 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-lfz22"] Nov 24 12:47:10 crc kubenswrapper[4756]: I1124 12:47:10.042248 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-877px"] Nov 24 12:47:10 crc kubenswrapper[4756]: W1124 12:47:10.056079 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07f3e378_99a3_4ef7_b4a2_15efaa919862.slice/crio-c99095755e21a5c9fb3ceb085fd9aa655d6b787f384b6643a6ba3438e41054af WatchSource:0}: Error finding container c99095755e21a5c9fb3ceb085fd9aa655d6b787f384b6643a6ba3438e41054af: Status 404 returned error can't find the container with id c99095755e21a5c9fb3ceb085fd9aa655d6b787f384b6643a6ba3438e41054af Nov 24 12:47:10 crc kubenswrapper[4756]: I1124 12:47:10.371941 4756 generic.go:334] "Generic (PLEG): container finished" podID="23a10485-152b-4bf5-bb3d-49fe345f390e" containerID="a7d14867dea67d3d6891a305d2fc8810cf67746628d5afe7e630c687daa4f02a" exitCode=0 Nov 24 12:47:10 crc kubenswrapper[4756]: I1124 12:47:10.372034 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-lfz22" event={"ID":"23a10485-152b-4bf5-bb3d-49fe345f390e","Type":"ContainerDied","Data":"a7d14867dea67d3d6891a305d2fc8810cf67746628d5afe7e630c687daa4f02a"} Nov 24 12:47:10 crc kubenswrapper[4756]: I1124 12:47:10.372069 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-lfz22" event={"ID":"23a10485-152b-4bf5-bb3d-49fe345f390e","Type":"ContainerStarted","Data":"30c957b4daa394124a8140c9657be2fff414aec41b829179e49e4e8465496324"} Nov 24 12:47:10 crc kubenswrapper[4756]: I1124 12:47:10.378799 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4vkkv" 
event={"ID":"e04cac76-134f-4232-aaad-c16ec2ef43dc","Type":"ContainerStarted","Data":"b2899f41db0e09f50eb9449220402c47b929894868609b21451cfe902e4a0d44"} Nov 24 12:47:10 crc kubenswrapper[4756]: I1124 12:47:10.414851 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-877px" event={"ID":"07f3e378-99a3-4ef7-b4a2-15efaa919862","Type":"ContainerStarted","Data":"abd06601e0e008f8e6bb0a9a5b28d77bddab957f254155be667d2c760f120413"} Nov 24 12:47:10 crc kubenswrapper[4756]: I1124 12:47:10.414909 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-877px" event={"ID":"07f3e378-99a3-4ef7-b4a2-15efaa919862","Type":"ContainerStarted","Data":"c99095755e21a5c9fb3ceb085fd9aa655d6b787f384b6643a6ba3438e41054af"} Nov 24 12:47:10 crc kubenswrapper[4756]: I1124 12:47:10.417247 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4vkkv" podStartSLOduration=3.417221126 podStartE2EDuration="3.417221126s" podCreationTimestamp="2025-11-24 12:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:47:10.411801479 +0000 UTC m=+1162.769315631" watchObservedRunningTime="2025-11-24 12:47:10.417221126 +0000 UTC m=+1162.774735268" Nov 24 12:47:10 crc kubenswrapper[4756]: I1124 12:47:10.445638 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-877px" podStartSLOduration=2.445610453 podStartE2EDuration="2.445610453s" podCreationTimestamp="2025-11-24 12:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:47:10.432679954 +0000 UTC m=+1162.790194106" watchObservedRunningTime="2025-11-24 12:47:10.445610453 +0000 UTC m=+1162.803124605" Nov 24 12:47:11 crc kubenswrapper[4756]: I1124 
12:47:11.438071 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-lfz22" event={"ID":"23a10485-152b-4bf5-bb3d-49fe345f390e","Type":"ContainerStarted","Data":"ddbfb3777c6aec779b6c09f89bddd566114d8a482dc656a6ce3c308d7c9e1079"} Nov 24 12:47:11 crc kubenswrapper[4756]: I1124 12:47:11.460856 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-lfz22" podStartSLOduration=4.46083604 podStartE2EDuration="4.46083604s" podCreationTimestamp="2025-11-24 12:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:47:11.456106502 +0000 UTC m=+1163.813620644" watchObservedRunningTime="2025-11-24 12:47:11.46083604 +0000 UTC m=+1163.818350182" Nov 24 12:47:11 crc kubenswrapper[4756]: I1124 12:47:11.636932 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:47:11 crc kubenswrapper[4756]: I1124 12:47:11.665538 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:12 crc kubenswrapper[4756]: I1124 12:47:12.446685 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:13 crc kubenswrapper[4756]: I1124 12:47:13.464903 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"913f11ff-7204-4059-a891-1d40b66f2b82","Type":"ContainerStarted","Data":"36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c"} Nov 24 12:47:13 crc kubenswrapper[4756]: I1124 12:47:13.469992 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f847c6f9-2c3a-4846-bc94-09a7685f3387","Type":"ContainerStarted","Data":"821c6ed53a1b520b2a12c1a84cfb2acf1f91b64b82600b6264e0af2a98b4fbbd"} Nov 24 12:47:13 crc kubenswrapper[4756]: I1124 12:47:13.470123 4756 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f847c6f9-2c3a-4846-bc94-09a7685f3387" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://821c6ed53a1b520b2a12c1a84cfb2acf1f91b64b82600b6264e0af2a98b4fbbd" gracePeriod=30 Nov 24 12:47:13 crc kubenswrapper[4756]: I1124 12:47:13.474508 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5","Type":"ContainerStarted","Data":"b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a"} Nov 24 12:47:13 crc kubenswrapper[4756]: I1124 12:47:13.481287 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc3f9be2-b906-40ba-85bb-dfb3151eb864","Type":"ContainerStarted","Data":"249c4463f63fbc1d73e4608fc6f70ac8ecb1e248650ec203e64463d499e82881"} Nov 24 12:47:13 crc kubenswrapper[4756]: I1124 12:47:13.494069 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.645723907 podStartE2EDuration="6.494037868s" podCreationTimestamp="2025-11-24 12:47:07 +0000 UTC" firstStartedPulling="2025-11-24 12:47:09.207822072 +0000 UTC m=+1161.565336214" lastFinishedPulling="2025-11-24 12:47:13.056136033 +0000 UTC m=+1165.413650175" observedRunningTime="2025-11-24 12:47:13.486171335 +0000 UTC m=+1165.843685497" watchObservedRunningTime="2025-11-24 12:47:13.494037868 +0000 UTC m=+1165.851552010" Nov 24 12:47:13 crc kubenswrapper[4756]: I1124 12:47:13.542042 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.672908752 podStartE2EDuration="6.542021145s" podCreationTimestamp="2025-11-24 12:47:07 +0000 UTC" firstStartedPulling="2025-11-24 12:47:09.203986159 +0000 UTC m=+1161.561500301" lastFinishedPulling="2025-11-24 12:47:13.073098552 +0000 UTC m=+1165.430612694" 
observedRunningTime="2025-11-24 12:47:13.503206026 +0000 UTC m=+1165.860720168" watchObservedRunningTime="2025-11-24 12:47:13.542021145 +0000 UTC m=+1165.899535287" Nov 24 12:47:14 crc kubenswrapper[4756]: I1124 12:47:14.495835 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc3f9be2-b906-40ba-85bb-dfb3151eb864","Type":"ContainerStarted","Data":"25fcc1fccb64acfe6ba157cf9b40b1b8017d1db436d6cfc968b645b43b96f1f8"} Nov 24 12:47:14 crc kubenswrapper[4756]: I1124 12:47:14.499973 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5","Type":"ContainerStarted","Data":"eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e"} Nov 24 12:47:14 crc kubenswrapper[4756]: I1124 12:47:14.499034 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" containerName="nova-metadata-metadata" containerID="cri-o://eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e" gracePeriod=30 Nov 24 12:47:14 crc kubenswrapper[4756]: I1124 12:47:14.498667 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" containerName="nova-metadata-log" containerID="cri-o://b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a" gracePeriod=30 Nov 24 12:47:14 crc kubenswrapper[4756]: I1124 12:47:14.532701 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.42924026 podStartE2EDuration="7.532671676s" podCreationTimestamp="2025-11-24 12:47:07 +0000 UTC" firstStartedPulling="2025-11-24 12:47:08.975896504 +0000 UTC m=+1161.333410646" lastFinishedPulling="2025-11-24 12:47:13.0793279 +0000 UTC m=+1165.436842062" observedRunningTime="2025-11-24 12:47:14.51764164 +0000 UTC m=+1166.875155782" 
watchObservedRunningTime="2025-11-24 12:47:14.532671676 +0000 UTC m=+1166.890185818" Nov 24 12:47:14 crc kubenswrapper[4756]: I1124 12:47:14.555598 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.142201853 podStartE2EDuration="7.555572575s" podCreationTimestamp="2025-11-24 12:47:07 +0000 UTC" firstStartedPulling="2025-11-24 12:47:08.663117872 +0000 UTC m=+1161.020632014" lastFinishedPulling="2025-11-24 12:47:13.076488584 +0000 UTC m=+1165.434002736" observedRunningTime="2025-11-24 12:47:14.548400842 +0000 UTC m=+1166.905914984" watchObservedRunningTime="2025-11-24 12:47:14.555572575 +0000 UTC m=+1166.913086717" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.112062 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.273660 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-config-data\") pod \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.273794 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-logs\") pod \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.273984 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-combined-ca-bundle\") pod \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.274049 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nkfg\" (UniqueName: \"kubernetes.io/projected/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-kube-api-access-4nkfg\") pod \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\" (UID: \"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5\") " Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.274709 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-logs" (OuterVolumeSpecName: "logs") pod "0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" (UID: "0f033c7f-6613-4ada-9d3b-afa8bbf42ed5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.275121 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.281720 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-kube-api-access-4nkfg" (OuterVolumeSpecName: "kube-api-access-4nkfg") pod "0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" (UID: "0f033c7f-6613-4ada-9d3b-afa8bbf42ed5"). InnerVolumeSpecName "kube-api-access-4nkfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.312349 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-config-data" (OuterVolumeSpecName: "config-data") pod "0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" (UID: "0f033c7f-6613-4ada-9d3b-afa8bbf42ed5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.332954 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" (UID: "0f033c7f-6613-4ada-9d3b-afa8bbf42ed5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.377758 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.377802 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nkfg\" (UniqueName: \"kubernetes.io/projected/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-kube-api-access-4nkfg\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.377869 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.512273 4756 generic.go:334] "Generic (PLEG): container finished" podID="0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" containerID="eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e" exitCode=0 Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.512313 4756 generic.go:334] "Generic (PLEG): container finished" podID="0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" containerID="b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a" exitCode=143 Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.512348 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.512376 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5","Type":"ContainerDied","Data":"eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e"} Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.512425 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5","Type":"ContainerDied","Data":"b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a"} Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.512437 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f033c7f-6613-4ada-9d3b-afa8bbf42ed5","Type":"ContainerDied","Data":"6b27c4e6f58bc2333eb145c9b319bc821d9217d737ae4f0ada62a9d57056ae72"} Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.512453 4756 scope.go:117] "RemoveContainer" containerID="eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.563905 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.575446 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.597421 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:15 crc kubenswrapper[4756]: E1124 12:47:15.597924 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" containerName="nova-metadata-log" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.597952 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" containerName="nova-metadata-log" Nov 24 12:47:15 
crc kubenswrapper[4756]: E1124 12:47:15.597980 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" containerName="nova-metadata-metadata" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.597989 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" containerName="nova-metadata-metadata" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.598338 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" containerName="nova-metadata-metadata" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.598375 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" containerName="nova-metadata-log" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.599685 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.604220 4756 scope.go:117] "RemoveContainer" containerID="b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.604710 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.605353 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.608309 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.650887 4756 scope.go:117] "RemoveContainer" containerID="eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e" Nov 24 12:47:15 crc kubenswrapper[4756]: E1124 12:47:15.651613 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e\": container with ID starting with eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e not found: ID does not exist" containerID="eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.651653 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e"} err="failed to get container status \"eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e\": rpc error: code = NotFound desc = could not find container \"eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e\": container with ID starting with eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e not found: ID does not exist" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.651677 4756 scope.go:117] "RemoveContainer" containerID="b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a" Nov 24 12:47:15 crc kubenswrapper[4756]: E1124 12:47:15.652422 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a\": container with ID starting with b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a not found: ID does not exist" containerID="b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.652442 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a"} err="failed to get container status \"b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a\": rpc error: code = NotFound desc = could not find container 
\"b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a\": container with ID starting with b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a not found: ID does not exist" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.652455 4756 scope.go:117] "RemoveContainer" containerID="eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.653280 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e"} err="failed to get container status \"eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e\": rpc error: code = NotFound desc = could not find container \"eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e\": container with ID starting with eef3e0e9f9c77726073749233f286670aeb74b8924de6577636b10d9f210ed0e not found: ID does not exist" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.653478 4756 scope.go:117] "RemoveContainer" containerID="b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.654077 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a"} err="failed to get container status \"b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a\": rpc error: code = NotFound desc = could not find container \"b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a\": container with ID starting with b0a7016cafd4e05725c8b476f72dec1088e5235e50b548ac03438405ffe41f8a not found: ID does not exist" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.686501 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.686961 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.687096 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e120a2-6637-4d03-a245-bca576627607-logs\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.687226 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-config-data\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.688055 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7jtm\" (UniqueName: \"kubernetes.io/projected/29e120a2-6637-4d03-a245-bca576627607-kube-api-access-c7jtm\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.790246 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e120a2-6637-4d03-a245-bca576627607-logs\") pod \"nova-metadata-0\" (UID: 
\"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.790785 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-config-data\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.790923 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7jtm\" (UniqueName: \"kubernetes.io/projected/29e120a2-6637-4d03-a245-bca576627607-kube-api-access-c7jtm\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.790706 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e120a2-6637-4d03-a245-bca576627607-logs\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.791190 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.791356 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.796339 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.797395 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.798454 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-config-data\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.812612 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7jtm\" (UniqueName: \"kubernetes.io/projected/29e120a2-6637-4d03-a245-bca576627607-kube-api-access-c7jtm\") pod \"nova-metadata-0\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.933394 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:47:15 crc kubenswrapper[4756]: I1124 12:47:15.998458 4756 scope.go:117] "RemoveContainer" containerID="7fd036b3591765b2035ccd54797dd917ab3221ab5667330524e8bbb8c739fdb0" Nov 24 12:47:16 crc kubenswrapper[4756]: I1124 12:47:16.036581 4756 scope.go:117] "RemoveContainer" containerID="485793ec046b6776f96cbc23398e8283a3de1df826aafeadb8b93897484e9595" Nov 24 12:47:16 crc kubenswrapper[4756]: I1124 12:47:16.081015 4756 scope.go:117] "RemoveContainer" containerID="3fdd00d481e9ea7d0727617a7e954fa5f6f543dd3a0d37fefe699475fd748a18" Nov 24 12:47:16 crc kubenswrapper[4756]: I1124 12:47:16.395476 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:16 crc kubenswrapper[4756]: W1124 12:47:16.401720 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29e120a2_6637_4d03_a245_bca576627607.slice/crio-3e93dab8613f084ee9d75ca5082331fcab389d48da1d19dd1442cba0c7ba48de WatchSource:0}: Error finding container 3e93dab8613f084ee9d75ca5082331fcab389d48da1d19dd1442cba0c7ba48de: Status 404 returned error can't find the container with id 3e93dab8613f084ee9d75ca5082331fcab389d48da1d19dd1442cba0c7ba48de Nov 24 12:47:16 crc kubenswrapper[4756]: I1124 12:47:16.493757 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f033c7f-6613-4ada-9d3b-afa8bbf42ed5" path="/var/lib/kubelet/pods/0f033c7f-6613-4ada-9d3b-afa8bbf42ed5/volumes" Nov 24 12:47:16 crc kubenswrapper[4756]: I1124 12:47:16.527023 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"29e120a2-6637-4d03-a245-bca576627607","Type":"ContainerStarted","Data":"3e93dab8613f084ee9d75ca5082331fcab389d48da1d19dd1442cba0c7ba48de"} Nov 24 12:47:17 crc kubenswrapper[4756]: I1124 12:47:17.539600 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"29e120a2-6637-4d03-a245-bca576627607","Type":"ContainerStarted","Data":"75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f"} Nov 24 12:47:17 crc kubenswrapper[4756]: I1124 12:47:17.539952 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"29e120a2-6637-4d03-a245-bca576627607","Type":"ContainerStarted","Data":"7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb"} Nov 24 12:47:17 crc kubenswrapper[4756]: I1124 12:47:17.565696 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.565670893 podStartE2EDuration="2.565670893s" podCreationTimestamp="2025-11-24 12:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:47:17.560083502 +0000 UTC m=+1169.917597664" watchObservedRunningTime="2025-11-24 12:47:17.565670893 +0000 UTC m=+1169.923185035" Nov 24 12:47:17 crc kubenswrapper[4756]: I1124 12:47:17.776590 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:47:17 crc kubenswrapper[4756]: I1124 12:47:17.776917 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.097688 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.118352 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.216103 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-7nxdw"] Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.216444 4756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" podUID="fe242a81-41af-42a9-8934-34f7d0ef485b" containerName="dnsmasq-dns" containerID="cri-o://c092cf358a395b8ecbcb46bb0d74a39a0f0bf12c49195d48eb378ff6ea515098" gracePeriod=10 Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.291299 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.291359 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.334498 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.583453 4756 generic.go:334] "Generic (PLEG): container finished" podID="e04cac76-134f-4232-aaad-c16ec2ef43dc" containerID="b2899f41db0e09f50eb9449220402c47b929894868609b21451cfe902e4a0d44" exitCode=0 Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.583923 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4vkkv" event={"ID":"e04cac76-134f-4232-aaad-c16ec2ef43dc","Type":"ContainerDied","Data":"b2899f41db0e09f50eb9449220402c47b929894868609b21451cfe902e4a0d44"} Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.601696 4756 generic.go:334] "Generic (PLEG): container finished" podID="fe242a81-41af-42a9-8934-34f7d0ef485b" containerID="c092cf358a395b8ecbcb46bb0d74a39a0f0bf12c49195d48eb378ff6ea515098" exitCode=0 Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.602493 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" event={"ID":"fe242a81-41af-42a9-8934-34f7d0ef485b","Type":"ContainerDied","Data":"c092cf358a395b8ecbcb46bb0d74a39a0f0bf12c49195d48eb378ff6ea515098"} Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.653021 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-scheduler-0" Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.859479 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dc3f9be2-b906-40ba-85bb-dfb3151eb864" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.860686 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dc3f9be2-b906-40ba-85bb-dfb3151eb864" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:47:18 crc kubenswrapper[4756]: I1124 12:47:18.918668 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.104939 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-dns-swift-storage-0\") pod \"fe242a81-41af-42a9-8934-34f7d0ef485b\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.105005 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-dns-svc\") pod \"fe242a81-41af-42a9-8934-34f7d0ef485b\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.105031 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-ovsdbserver-sb\") pod \"fe242a81-41af-42a9-8934-34f7d0ef485b\" (UID: 
\"fe242a81-41af-42a9-8934-34f7d0ef485b\") " Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.105049 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdzpp\" (UniqueName: \"kubernetes.io/projected/fe242a81-41af-42a9-8934-34f7d0ef485b-kube-api-access-fdzpp\") pod \"fe242a81-41af-42a9-8934-34f7d0ef485b\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.105066 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-config\") pod \"fe242a81-41af-42a9-8934-34f7d0ef485b\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.105253 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-ovsdbserver-nb\") pod \"fe242a81-41af-42a9-8934-34f7d0ef485b\" (UID: \"fe242a81-41af-42a9-8934-34f7d0ef485b\") " Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.116917 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe242a81-41af-42a9-8934-34f7d0ef485b-kube-api-access-fdzpp" (OuterVolumeSpecName: "kube-api-access-fdzpp") pod "fe242a81-41af-42a9-8934-34f7d0ef485b" (UID: "fe242a81-41af-42a9-8934-34f7d0ef485b"). InnerVolumeSpecName "kube-api-access-fdzpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.190343 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-config" (OuterVolumeSpecName: "config") pod "fe242a81-41af-42a9-8934-34f7d0ef485b" (UID: "fe242a81-41af-42a9-8934-34f7d0ef485b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.207298 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdzpp\" (UniqueName: \"kubernetes.io/projected/fe242a81-41af-42a9-8934-34f7d0ef485b-kube-api-access-fdzpp\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.207333 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.240980 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe242a81-41af-42a9-8934-34f7d0ef485b" (UID: "fe242a81-41af-42a9-8934-34f7d0ef485b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.248267 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe242a81-41af-42a9-8934-34f7d0ef485b" (UID: "fe242a81-41af-42a9-8934-34f7d0ef485b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.255109 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fe242a81-41af-42a9-8934-34f7d0ef485b" (UID: "fe242a81-41af-42a9-8934-34f7d0ef485b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.277508 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fe242a81-41af-42a9-8934-34f7d0ef485b" (UID: "fe242a81-41af-42a9-8934-34f7d0ef485b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.309717 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.309756 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.309768 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.309777 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe242a81-41af-42a9-8934-34f7d0ef485b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.622528 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" event={"ID":"fe242a81-41af-42a9-8934-34f7d0ef485b","Type":"ContainerDied","Data":"0577e5d302338093fb07aafe3066b46b7720f6a7b4fe614629ff59e8da0f56e2"} Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.622581 4756 scope.go:117] "RemoveContainer" 
containerID="c092cf358a395b8ecbcb46bb0d74a39a0f0bf12c49195d48eb378ff6ea515098" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.622734 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-7nxdw" Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.628668 4756 generic.go:334] "Generic (PLEG): container finished" podID="07f3e378-99a3-4ef7-b4a2-15efaa919862" containerID="abd06601e0e008f8e6bb0a9a5b28d77bddab957f254155be667d2c760f120413" exitCode=0 Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.628902 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-877px" event={"ID":"07f3e378-99a3-4ef7-b4a2-15efaa919862","Type":"ContainerDied","Data":"abd06601e0e008f8e6bb0a9a5b28d77bddab957f254155be667d2c760f120413"} Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.678526 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-7nxdw"] Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.685580 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-7nxdw"] Nov 24 12:47:19 crc kubenswrapper[4756]: I1124 12:47:19.708622 4756 scope.go:117] "RemoveContainer" containerID="c65bafe6ef3e1115c1b26acc55ab9dee565a08d3f0572b393c6b6f89476317a2" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.122570 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.237336 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-combined-ca-bundle\") pod \"e04cac76-134f-4232-aaad-c16ec2ef43dc\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.237419 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-scripts\") pod \"e04cac76-134f-4232-aaad-c16ec2ef43dc\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.237591 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw589\" (UniqueName: \"kubernetes.io/projected/e04cac76-134f-4232-aaad-c16ec2ef43dc-kube-api-access-vw589\") pod \"e04cac76-134f-4232-aaad-c16ec2ef43dc\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.237667 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-config-data\") pod \"e04cac76-134f-4232-aaad-c16ec2ef43dc\" (UID: \"e04cac76-134f-4232-aaad-c16ec2ef43dc\") " Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.243773 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-scripts" (OuterVolumeSpecName: "scripts") pod "e04cac76-134f-4232-aaad-c16ec2ef43dc" (UID: "e04cac76-134f-4232-aaad-c16ec2ef43dc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.243847 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04cac76-134f-4232-aaad-c16ec2ef43dc-kube-api-access-vw589" (OuterVolumeSpecName: "kube-api-access-vw589") pod "e04cac76-134f-4232-aaad-c16ec2ef43dc" (UID: "e04cac76-134f-4232-aaad-c16ec2ef43dc"). InnerVolumeSpecName "kube-api-access-vw589". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.276923 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e04cac76-134f-4232-aaad-c16ec2ef43dc" (UID: "e04cac76-134f-4232-aaad-c16ec2ef43dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.295791 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-config-data" (OuterVolumeSpecName: "config-data") pod "e04cac76-134f-4232-aaad-c16ec2ef43dc" (UID: "e04cac76-134f-4232-aaad-c16ec2ef43dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.340557 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.340598 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.340609 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw589\" (UniqueName: \"kubernetes.io/projected/e04cac76-134f-4232-aaad-c16ec2ef43dc-kube-api-access-vw589\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.340620 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04cac76-134f-4232-aaad-c16ec2ef43dc-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.493439 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe242a81-41af-42a9-8934-34f7d0ef485b" path="/var/lib/kubelet/pods/fe242a81-41af-42a9-8934-34f7d0ef485b/volumes" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.642591 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4vkkv" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.643465 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4vkkv" event={"ID":"e04cac76-134f-4232-aaad-c16ec2ef43dc","Type":"ContainerDied","Data":"deee3e16877530cc76e268449cf02a0a279b71c5b59a457b14aea9109c02f3a9"} Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.643514 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deee3e16877530cc76e268449cf02a0a279b71c5b59a457b14aea9109c02f3a9" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.869440 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.869982 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="913f11ff-7204-4059-a891-1d40b66f2b82" containerName="nova-scheduler-scheduler" containerID="cri-o://36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c" gracePeriod=30 Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.887333 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.887782 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dc3f9be2-b906-40ba-85bb-dfb3151eb864" containerName="nova-api-log" containerID="cri-o://249c4463f63fbc1d73e4608fc6f70ac8ecb1e248650ec203e64463d499e82881" gracePeriod=30 Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.888032 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dc3f9be2-b906-40ba-85bb-dfb3151eb864" containerName="nova-api-api" containerID="cri-o://25fcc1fccb64acfe6ba157cf9b40b1b8017d1db436d6cfc968b645b43b96f1f8" gracePeriod=30 Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 
12:47:20.917629 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.917970 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="29e120a2-6637-4d03-a245-bca576627607" containerName="nova-metadata-log" containerID="cri-o://7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb" gracePeriod=30 Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.918730 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="29e120a2-6637-4d03-a245-bca576627607" containerName="nova-metadata-metadata" containerID="cri-o://75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f" gracePeriod=30 Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.934588 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:47:20 crc kubenswrapper[4756]: I1124 12:47:20.934654 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.224476 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.372957 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-scripts\") pod \"07f3e378-99a3-4ef7-b4a2-15efaa919862\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.373038 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp2fs\" (UniqueName: \"kubernetes.io/projected/07f3e378-99a3-4ef7-b4a2-15efaa919862-kube-api-access-pp2fs\") pod \"07f3e378-99a3-4ef7-b4a2-15efaa919862\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.373131 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-config-data\") pod \"07f3e378-99a3-4ef7-b4a2-15efaa919862\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.373292 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-combined-ca-bundle\") pod \"07f3e378-99a3-4ef7-b4a2-15efaa919862\" (UID: \"07f3e378-99a3-4ef7-b4a2-15efaa919862\") " Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.394471 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f3e378-99a3-4ef7-b4a2-15efaa919862-kube-api-access-pp2fs" (OuterVolumeSpecName: "kube-api-access-pp2fs") pod "07f3e378-99a3-4ef7-b4a2-15efaa919862" (UID: "07f3e378-99a3-4ef7-b4a2-15efaa919862"). InnerVolumeSpecName "kube-api-access-pp2fs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.410448 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-scripts" (OuterVolumeSpecName: "scripts") pod "07f3e378-99a3-4ef7-b4a2-15efaa919862" (UID: "07f3e378-99a3-4ef7-b4a2-15efaa919862"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.413508 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07f3e378-99a3-4ef7-b4a2-15efaa919862" (UID: "07f3e378-99a3-4ef7-b4a2-15efaa919862"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.422837 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-config-data" (OuterVolumeSpecName: "config-data") pod "07f3e378-99a3-4ef7-b4a2-15efaa919862" (UID: "07f3e378-99a3-4ef7-b4a2-15efaa919862"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.476061 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.476107 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.476120 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp2fs\" (UniqueName: \"kubernetes.io/projected/07f3e378-99a3-4ef7-b4a2-15efaa919862-kube-api-access-pp2fs\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.476130 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f3e378-99a3-4ef7-b4a2-15efaa919862-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.544188 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.670329 4756 generic.go:334] "Generic (PLEG): container finished" podID="dc3f9be2-b906-40ba-85bb-dfb3151eb864" containerID="249c4463f63fbc1d73e4608fc6f70ac8ecb1e248650ec203e64463d499e82881" exitCode=143 Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.670391 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc3f9be2-b906-40ba-85bb-dfb3151eb864","Type":"ContainerDied","Data":"249c4463f63fbc1d73e4608fc6f70ac8ecb1e248650ec203e64463d499e82881"} Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.673482 4756 generic.go:334] "Generic (PLEG): container finished" podID="29e120a2-6637-4d03-a245-bca576627607" containerID="75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f" exitCode=0 Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.673505 4756 generic.go:334] "Generic (PLEG): container finished" podID="29e120a2-6637-4d03-a245-bca576627607" containerID="7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb" exitCode=143 Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.673553 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"29e120a2-6637-4d03-a245-bca576627607","Type":"ContainerDied","Data":"75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f"} Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.673579 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"29e120a2-6637-4d03-a245-bca576627607","Type":"ContainerDied","Data":"7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb"} Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.673592 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"29e120a2-6637-4d03-a245-bca576627607","Type":"ContainerDied","Data":"3e93dab8613f084ee9d75ca5082331fcab389d48da1d19dd1442cba0c7ba48de"} Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.673609 4756 scope.go:117] "RemoveContainer" containerID="75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.673757 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.679147 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-config-data\") pod \"29e120a2-6637-4d03-a245-bca576627607\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.679571 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7jtm\" (UniqueName: \"kubernetes.io/projected/29e120a2-6637-4d03-a245-bca576627607-kube-api-access-c7jtm\") pod \"29e120a2-6637-4d03-a245-bca576627607\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.679632 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-nova-metadata-tls-certs\") pod \"29e120a2-6637-4d03-a245-bca576627607\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.679665 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e120a2-6637-4d03-a245-bca576627607-logs\") pod \"29e120a2-6637-4d03-a245-bca576627607\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.679732 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-combined-ca-bundle\") pod \"29e120a2-6637-4d03-a245-bca576627607\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.681614 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-877px" event={"ID":"07f3e378-99a3-4ef7-b4a2-15efaa919862","Type":"ContainerDied","Data":"c99095755e21a5c9fb3ceb085fd9aa655d6b787f384b6643a6ba3438e41054af"} Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.681765 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c99095755e21a5c9fb3ceb085fd9aa655d6b787f384b6643a6ba3438e41054af" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.681899 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-877px" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.683017 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29e120a2-6637-4d03-a245-bca576627607-logs" (OuterVolumeSpecName: "logs") pod "29e120a2-6637-4d03-a245-bca576627607" (UID: "29e120a2-6637-4d03-a245-bca576627607"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.698546 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e120a2-6637-4d03-a245-bca576627607-kube-api-access-c7jtm" (OuterVolumeSpecName: "kube-api-access-c7jtm") pod "29e120a2-6637-4d03-a245-bca576627607" (UID: "29e120a2-6637-4d03-a245-bca576627607"). InnerVolumeSpecName "kube-api-access-c7jtm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.721189 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29e120a2-6637-4d03-a245-bca576627607" (UID: "29e120a2-6637-4d03-a245-bca576627607"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.754908 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.755050 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-config-data" (OuterVolumeSpecName: "config-data") pod "29e120a2-6637-4d03-a245-bca576627607" (UID: "29e120a2-6637-4d03-a245-bca576627607"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:21 crc kubenswrapper[4756]: E1124 12:47:21.755704 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e120a2-6637-4d03-a245-bca576627607" containerName="nova-metadata-log" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.755727 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e120a2-6637-4d03-a245-bca576627607" containerName="nova-metadata-log" Nov 24 12:47:21 crc kubenswrapper[4756]: E1124 12:47:21.755741 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe242a81-41af-42a9-8934-34f7d0ef485b" containerName="init" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.755748 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe242a81-41af-42a9-8934-34f7d0ef485b" containerName="init" Nov 24 12:47:21 crc kubenswrapper[4756]: E1124 12:47:21.755784 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe242a81-41af-42a9-8934-34f7d0ef485b" containerName="dnsmasq-dns" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.755795 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe242a81-41af-42a9-8934-34f7d0ef485b" containerName="dnsmasq-dns" Nov 24 12:47:21 crc kubenswrapper[4756]: E1124 12:47:21.755809 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04cac76-134f-4232-aaad-c16ec2ef43dc" containerName="nova-manage" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.755817 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04cac76-134f-4232-aaad-c16ec2ef43dc" containerName="nova-manage" Nov 24 12:47:21 crc kubenswrapper[4756]: E1124 12:47:21.755826 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e120a2-6637-4d03-a245-bca576627607" containerName="nova-metadata-metadata" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.755833 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e120a2-6637-4d03-a245-bca576627607" 
containerName="nova-metadata-metadata" Nov 24 12:47:21 crc kubenswrapper[4756]: E1124 12:47:21.755854 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f3e378-99a3-4ef7-b4a2-15efaa919862" containerName="nova-cell1-conductor-db-sync" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.755861 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f3e378-99a3-4ef7-b4a2-15efaa919862" containerName="nova-cell1-conductor-db-sync" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.756111 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e120a2-6637-4d03-a245-bca576627607" containerName="nova-metadata-log" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.756127 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e120a2-6637-4d03-a245-bca576627607" containerName="nova-metadata-metadata" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.756144 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f3e378-99a3-4ef7-b4a2-15efaa919862" containerName="nova-cell1-conductor-db-sync" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.756175 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe242a81-41af-42a9-8934-34f7d0ef485b" containerName="dnsmasq-dns" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.756191 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e04cac76-134f-4232-aaad-c16ec2ef43dc" containerName="nova-manage" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.757061 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.776496 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.778920 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.781526 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "29e120a2-6637-4d03-a245-bca576627607" (UID: "29e120a2-6637-4d03-a245-bca576627607"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.781833 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-nova-metadata-tls-certs\") pod \"29e120a2-6637-4d03-a245-bca576627607\" (UID: \"29e120a2-6637-4d03-a245-bca576627607\") " Nov 24 12:47:21 crc kubenswrapper[4756]: W1124 12:47:21.782000 4756 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/29e120a2-6637-4d03-a245-bca576627607/volumes/kubernetes.io~secret/nova-metadata-tls-certs Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.782066 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "29e120a2-6637-4d03-a245-bca576627607" (UID: "29e120a2-6637-4d03-a245-bca576627607"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.783343 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.783374 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7jtm\" (UniqueName: \"kubernetes.io/projected/29e120a2-6637-4d03-a245-bca576627607-kube-api-access-c7jtm\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.783388 4756 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.783401 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e120a2-6637-4d03-a245-bca576627607-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.783506 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e120a2-6637-4d03-a245-bca576627607-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.817842 4756 scope.go:117] "RemoveContainer" containerID="7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.844578 4756 scope.go:117] "RemoveContainer" containerID="75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f" Nov 24 12:47:21 crc kubenswrapper[4756]: E1124 12:47:21.845233 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f\": container with ID starting with 75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f not found: ID does not exist" containerID="75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.845276 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f"} err="failed to get container status \"75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f\": rpc error: code = NotFound desc = could not find container \"75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f\": container with ID starting with 75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f not found: ID does not exist" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.845301 4756 scope.go:117] "RemoveContainer" containerID="7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb" Nov 24 12:47:21 crc kubenswrapper[4756]: E1124 12:47:21.845738 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb\": container with ID starting with 7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb not found: ID does not exist" containerID="7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.845770 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb"} err="failed to get container status \"7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb\": rpc error: code = NotFound desc = could not find container \"7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb\": container with ID 
starting with 7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb not found: ID does not exist" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.845786 4756 scope.go:117] "RemoveContainer" containerID="75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.846299 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f"} err="failed to get container status \"75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f\": rpc error: code = NotFound desc = could not find container \"75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f\": container with ID starting with 75ead91857eee834fcabd65406732c48eae71e45ee36f598ccfb1c4909f6491f not found: ID does not exist" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.846348 4756 scope.go:117] "RemoveContainer" containerID="7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.846712 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb"} err="failed to get container status \"7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb\": rpc error: code = NotFound desc = could not find container \"7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb\": container with ID starting with 7539b1f9a9a02c0ca265157925b4e2cd83ed6982030338e6228c410af4b8d8bb not found: ID does not exist" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.884863 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtppf\" (UniqueName: \"kubernetes.io/projected/94da8bb7-f5c5-4411-be93-40a15bb4c121-kube-api-access-gtppf\") pod \"nova-cell1-conductor-0\" (UID: 
\"94da8bb7-f5c5-4411-be93-40a15bb4c121\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.884922 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da8bb7-f5c5-4411-be93-40a15bb4c121-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"94da8bb7-f5c5-4411-be93-40a15bb4c121\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.885052 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da8bb7-f5c5-4411-be93-40a15bb4c121-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"94da8bb7-f5c5-4411-be93-40a15bb4c121\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.986504 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtppf\" (UniqueName: \"kubernetes.io/projected/94da8bb7-f5c5-4411-be93-40a15bb4c121-kube-api-access-gtppf\") pod \"nova-cell1-conductor-0\" (UID: \"94da8bb7-f5c5-4411-be93-40a15bb4c121\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.986575 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da8bb7-f5c5-4411-be93-40a15bb4c121-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"94da8bb7-f5c5-4411-be93-40a15bb4c121\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.986758 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da8bb7-f5c5-4411-be93-40a15bb4c121-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"94da8bb7-f5c5-4411-be93-40a15bb4c121\") " 
pod="openstack/nova-cell1-conductor-0" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.990804 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94da8bb7-f5c5-4411-be93-40a15bb4c121-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"94da8bb7-f5c5-4411-be93-40a15bb4c121\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:47:21 crc kubenswrapper[4756]: I1124 12:47:21.990809 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94da8bb7-f5c5-4411-be93-40a15bb4c121-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"94da8bb7-f5c5-4411-be93-40a15bb4c121\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.010829 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtppf\" (UniqueName: \"kubernetes.io/projected/94da8bb7-f5c5-4411-be93-40a15bb4c121-kube-api-access-gtppf\") pod \"nova-cell1-conductor-0\" (UID: \"94da8bb7-f5c5-4411-be93-40a15bb4c121\") " pod="openstack/nova-cell1-conductor-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.024024 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.036987 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.046855 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.052667 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.055057 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.055865 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.062712 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.124319 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.190002 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.190070 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-logs\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.190138 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.190260 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm9fj\" (UniqueName: \"kubernetes.io/projected/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-kube-api-access-nm9fj\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.190363 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-config-data\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.294313 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-config-data\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.294824 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.295001 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-logs\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.295259 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.295487 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm9fj\" (UniqueName: \"kubernetes.io/projected/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-kube-api-access-nm9fj\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.297139 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-logs\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.300550 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-config-data\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.300787 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.300796 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc 
kubenswrapper[4756]: I1124 12:47:22.319900 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm9fj\" (UniqueName: \"kubernetes.io/projected/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-kube-api-access-nm9fj\") pod \"nova-metadata-0\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.395455 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.489683 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e120a2-6637-4d03-a245-bca576627607" path="/var/lib/kubelet/pods/29e120a2-6637-4d03-a245-bca576627607/volumes" Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.572700 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.708292 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"94da8bb7-f5c5-4411-be93-40a15bb4c121","Type":"ContainerStarted","Data":"0e0828bb22d4b0b092a34863372794277c6492207885880135947b346ca0d0d8"} Nov 24 12:47:22 crc kubenswrapper[4756]: W1124 12:47:22.876890 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc18efc1_65e5_4ce5_9514_be7474d3f8bb.slice/crio-8de28ac73810d635636fa35e1601e019bbdd169868542750e1ee7a078295118d WatchSource:0}: Error finding container 8de28ac73810d635636fa35e1601e019bbdd169868542750e1ee7a078295118d: Status 404 returned error can't find the container with id 8de28ac73810d635636fa35e1601e019bbdd169868542750e1ee7a078295118d Nov 24 12:47:22 crc kubenswrapper[4756]: I1124 12:47:22.877286 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:47:23 crc kubenswrapper[4756]: E1124 12:47:23.293917 4756 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:47:23 crc kubenswrapper[4756]: E1124 12:47:23.296795 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:47:23 crc kubenswrapper[4756]: E1124 12:47:23.299264 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:47:23 crc kubenswrapper[4756]: E1124 12:47:23.299326 4756 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="913f11ff-7204-4059-a891-1d40b66f2b82" containerName="nova-scheduler-scheduler" Nov 24 12:47:23 crc kubenswrapper[4756]: I1124 12:47:23.721510 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"94da8bb7-f5c5-4411-be93-40a15bb4c121","Type":"ContainerStarted","Data":"9f32f91cdb80138c1c2ab40a08d8b221db50a28cedab2a5f776c9fd97d8c01f3"} Nov 24 12:47:23 crc kubenswrapper[4756]: I1124 12:47:23.723268 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 24 12:47:23 crc 
kubenswrapper[4756]: I1124 12:47:23.725248 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc18efc1-65e5-4ce5-9514-be7474d3f8bb","Type":"ContainerStarted","Data":"d2b2d46c491c0ec947aa60a55c240d4249f48b2f69c968e94faafe45bbe4fd0e"} Nov 24 12:47:23 crc kubenswrapper[4756]: I1124 12:47:23.725328 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc18efc1-65e5-4ce5-9514-be7474d3f8bb","Type":"ContainerStarted","Data":"30585770cca1a3a74ce55b4dba7d78f522f6d4d1cdc4154e6b545c4478d6657a"} Nov 24 12:47:23 crc kubenswrapper[4756]: I1124 12:47:23.725347 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc18efc1-65e5-4ce5-9514-be7474d3f8bb","Type":"ContainerStarted","Data":"8de28ac73810d635636fa35e1601e019bbdd169868542750e1ee7a078295118d"} Nov 24 12:47:23 crc kubenswrapper[4756]: I1124 12:47:23.796700 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.796666266 podStartE2EDuration="2.796666266s" podCreationTimestamp="2025-11-24 12:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:47:23.749848241 +0000 UTC m=+1176.107362393" watchObservedRunningTime="2025-11-24 12:47:23.796666266 +0000 UTC m=+1176.154180408" Nov 24 12:47:23 crc kubenswrapper[4756]: I1124 12:47:23.799693 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.799675607 podStartE2EDuration="1.799675607s" podCreationTimestamp="2025-11-24 12:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:47:23.778828644 +0000 UTC m=+1176.136342796" watchObservedRunningTime="2025-11-24 12:47:23.799675607 +0000 UTC 
m=+1176.157189749" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.488582 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.569991 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppfgp\" (UniqueName: \"kubernetes.io/projected/913f11ff-7204-4059-a891-1d40b66f2b82-kube-api-access-ppfgp\") pod \"913f11ff-7204-4059-a891-1d40b66f2b82\" (UID: \"913f11ff-7204-4059-a891-1d40b66f2b82\") " Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.570151 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913f11ff-7204-4059-a891-1d40b66f2b82-combined-ca-bundle\") pod \"913f11ff-7204-4059-a891-1d40b66f2b82\" (UID: \"913f11ff-7204-4059-a891-1d40b66f2b82\") " Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.570568 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913f11ff-7204-4059-a891-1d40b66f2b82-config-data\") pod \"913f11ff-7204-4059-a891-1d40b66f2b82\" (UID: \"913f11ff-7204-4059-a891-1d40b66f2b82\") " Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.577451 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/913f11ff-7204-4059-a891-1d40b66f2b82-kube-api-access-ppfgp" (OuterVolumeSpecName: "kube-api-access-ppfgp") pod "913f11ff-7204-4059-a891-1d40b66f2b82" (UID: "913f11ff-7204-4059-a891-1d40b66f2b82"). InnerVolumeSpecName "kube-api-access-ppfgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.605299 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913f11ff-7204-4059-a891-1d40b66f2b82-config-data" (OuterVolumeSpecName: "config-data") pod "913f11ff-7204-4059-a891-1d40b66f2b82" (UID: "913f11ff-7204-4059-a891-1d40b66f2b82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.617848 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913f11ff-7204-4059-a891-1d40b66f2b82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "913f11ff-7204-4059-a891-1d40b66f2b82" (UID: "913f11ff-7204-4059-a891-1d40b66f2b82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.673965 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913f11ff-7204-4059-a891-1d40b66f2b82-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.674039 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppfgp\" (UniqueName: \"kubernetes.io/projected/913f11ff-7204-4059-a891-1d40b66f2b82-kube-api-access-ppfgp\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.674186 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913f11ff-7204-4059-a891-1d40b66f2b82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.750569 4756 generic.go:334] "Generic (PLEG): container finished" podID="dc3f9be2-b906-40ba-85bb-dfb3151eb864" containerID="25fcc1fccb64acfe6ba157cf9b40b1b8017d1db436d6cfc968b645b43b96f1f8" 
exitCode=0 Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.750667 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc3f9be2-b906-40ba-85bb-dfb3151eb864","Type":"ContainerDied","Data":"25fcc1fccb64acfe6ba157cf9b40b1b8017d1db436d6cfc968b645b43b96f1f8"} Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.752529 4756 generic.go:334] "Generic (PLEG): container finished" podID="913f11ff-7204-4059-a891-1d40b66f2b82" containerID="36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c" exitCode=0 Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.752561 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"913f11ff-7204-4059-a891-1d40b66f2b82","Type":"ContainerDied","Data":"36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c"} Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.752586 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"913f11ff-7204-4059-a891-1d40b66f2b82","Type":"ContainerDied","Data":"642b39d3179ea0efd6a555f9ff78cbd865874ad1d2aae43ff3da741aa9b0f2d6"} Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.752607 4756 scope.go:117] "RemoveContainer" containerID="36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.752778 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.787469 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.795699 4756 scope.go:117] "RemoveContainer" containerID="36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c" Nov 24 12:47:25 crc kubenswrapper[4756]: E1124 12:47:25.797465 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c\": container with ID starting with 36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c not found: ID does not exist" containerID="36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.797500 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c"} err="failed to get container status \"36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c\": rpc error: code = NotFound desc = could not find container \"36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c\": container with ID starting with 36fc5d535e1228bcc28752a30c82421906bb50aa9416ed68db7d669554ad3f7c not found: ID does not exist" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.816488 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.839623 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.859468 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:47:25 crc kubenswrapper[4756]: E1124 12:47:25.860024 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913f11ff-7204-4059-a891-1d40b66f2b82" containerName="nova-scheduler-scheduler" Nov 24 12:47:25 crc kubenswrapper[4756]: 
I1124 12:47:25.860043 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="913f11ff-7204-4059-a891-1d40b66f2b82" containerName="nova-scheduler-scheduler" Nov 24 12:47:25 crc kubenswrapper[4756]: E1124 12:47:25.860135 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3f9be2-b906-40ba-85bb-dfb3151eb864" containerName="nova-api-log" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.860144 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3f9be2-b906-40ba-85bb-dfb3151eb864" containerName="nova-api-log" Nov 24 12:47:25 crc kubenswrapper[4756]: E1124 12:47:25.860170 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3f9be2-b906-40ba-85bb-dfb3151eb864" containerName="nova-api-api" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.860178 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3f9be2-b906-40ba-85bb-dfb3151eb864" containerName="nova-api-api" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.860440 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3f9be2-b906-40ba-85bb-dfb3151eb864" containerName="nova-api-api" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.860461 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="913f11ff-7204-4059-a891-1d40b66f2b82" containerName="nova-scheduler-scheduler" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.860474 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3f9be2-b906-40ba-85bb-dfb3151eb864" containerName="nova-api-log" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.861193 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.863488 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.876692 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3f9be2-b906-40ba-85bb-dfb3151eb864-combined-ca-bundle\") pod \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.876823 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktmvj\" (UniqueName: \"kubernetes.io/projected/dc3f9be2-b906-40ba-85bb-dfb3151eb864-kube-api-access-ktmvj\") pod \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.876867 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3f9be2-b906-40ba-85bb-dfb3151eb864-logs\") pod \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.876920 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3f9be2-b906-40ba-85bb-dfb3151eb864-config-data\") pod \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\" (UID: \"dc3f9be2-b906-40ba-85bb-dfb3151eb864\") " Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.880323 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc3f9be2-b906-40ba-85bb-dfb3151eb864-logs" (OuterVolumeSpecName: "logs") pod "dc3f9be2-b906-40ba-85bb-dfb3151eb864" (UID: "dc3f9be2-b906-40ba-85bb-dfb3151eb864"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.883410 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3f9be2-b906-40ba-85bb-dfb3151eb864-kube-api-access-ktmvj" (OuterVolumeSpecName: "kube-api-access-ktmvj") pod "dc3f9be2-b906-40ba-85bb-dfb3151eb864" (UID: "dc3f9be2-b906-40ba-85bb-dfb3151eb864"). InnerVolumeSpecName "kube-api-access-ktmvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.888786 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.909427 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3f9be2-b906-40ba-85bb-dfb3151eb864-config-data" (OuterVolumeSpecName: "config-data") pod "dc3f9be2-b906-40ba-85bb-dfb3151eb864" (UID: "dc3f9be2-b906-40ba-85bb-dfb3151eb864"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.913792 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3f9be2-b906-40ba-85bb-dfb3151eb864-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc3f9be2-b906-40ba-85bb-dfb3151eb864" (UID: "dc3f9be2-b906-40ba-85bb-dfb3151eb864"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.979096 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6324dec-5a51-4c52-be79-9ff505e69807-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c6324dec-5a51-4c52-be79-9ff505e69807\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.979205 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrh7\" (UniqueName: \"kubernetes.io/projected/c6324dec-5a51-4c52-be79-9ff505e69807-kube-api-access-9nrh7\") pod \"nova-scheduler-0\" (UID: \"c6324dec-5a51-4c52-be79-9ff505e69807\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.979610 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6324dec-5a51-4c52-be79-9ff505e69807-config-data\") pod \"nova-scheduler-0\" (UID: \"c6324dec-5a51-4c52-be79-9ff505e69807\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.979931 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3f9be2-b906-40ba-85bb-dfb3151eb864-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.979957 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktmvj\" (UniqueName: \"kubernetes.io/projected/dc3f9be2-b906-40ba-85bb-dfb3151eb864-kube-api-access-ktmvj\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.979970 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3f9be2-b906-40ba-85bb-dfb3151eb864-logs\") on node 
\"crc\" DevicePath \"\"" Nov 24 12:47:25 crc kubenswrapper[4756]: I1124 12:47:25.980009 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3f9be2-b906-40ba-85bb-dfb3151eb864-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.082350 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6324dec-5a51-4c52-be79-9ff505e69807-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c6324dec-5a51-4c52-be79-9ff505e69807\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.082466 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nrh7\" (UniqueName: \"kubernetes.io/projected/c6324dec-5a51-4c52-be79-9ff505e69807-kube-api-access-9nrh7\") pod \"nova-scheduler-0\" (UID: \"c6324dec-5a51-4c52-be79-9ff505e69807\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.082668 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6324dec-5a51-4c52-be79-9ff505e69807-config-data\") pod \"nova-scheduler-0\" (UID: \"c6324dec-5a51-4c52-be79-9ff505e69807\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.087351 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6324dec-5a51-4c52-be79-9ff505e69807-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c6324dec-5a51-4c52-be79-9ff505e69807\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.087503 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6324dec-5a51-4c52-be79-9ff505e69807-config-data\") pod 
\"nova-scheduler-0\" (UID: \"c6324dec-5a51-4c52-be79-9ff505e69807\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.107900 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nrh7\" (UniqueName: \"kubernetes.io/projected/c6324dec-5a51-4c52-be79-9ff505e69807-kube-api-access-9nrh7\") pod \"nova-scheduler-0\" (UID: \"c6324dec-5a51-4c52-be79-9ff505e69807\") " pod="openstack/nova-scheduler-0" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.184443 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.502127 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="913f11ff-7204-4059-a891-1d40b66f2b82" path="/var/lib/kubelet/pods/913f11ff-7204-4059-a891-1d40b66f2b82/volumes" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.653468 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:47:26 crc kubenswrapper[4756]: W1124 12:47:26.655380 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6324dec_5a51_4c52_be79_9ff505e69807.slice/crio-66c87cd15a731935ad88b6ce9290a4098232a7082ad6f34d4984070257cd840d WatchSource:0}: Error finding container 66c87cd15a731935ad88b6ce9290a4098232a7082ad6f34d4984070257cd840d: Status 404 returned error can't find the container with id 66c87cd15a731935ad88b6ce9290a4098232a7082ad6f34d4984070257cd840d Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.777069 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c6324dec-5a51-4c52-be79-9ff505e69807","Type":"ContainerStarted","Data":"66c87cd15a731935ad88b6ce9290a4098232a7082ad6f34d4984070257cd840d"} Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.780398 4756 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-api-0" event={"ID":"dc3f9be2-b906-40ba-85bb-dfb3151eb864","Type":"ContainerDied","Data":"9b944fe19b7f17bbfd8aaf9ad9e3eebcece6e5c0b173c466d337fc52ec04b779"} Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.780478 4756 scope.go:117] "RemoveContainer" containerID="25fcc1fccb64acfe6ba157cf9b40b1b8017d1db436d6cfc968b645b43b96f1f8" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.780694 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.819173 4756 scope.go:117] "RemoveContainer" containerID="249c4463f63fbc1d73e4608fc6f70ac8ecb1e248650ec203e64463d499e82881" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.826916 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.859903 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.870390 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.872286 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.875684 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 12:47:26 crc kubenswrapper[4756]: I1124 12:47:26.879406 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.005747 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181a0a4c-0297-438f-a928-1127f0b93627-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " pod="openstack/nova-api-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.005822 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-257ck\" (UniqueName: \"kubernetes.io/projected/181a0a4c-0297-438f-a928-1127f0b93627-kube-api-access-257ck\") pod \"nova-api-0\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " pod="openstack/nova-api-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.005866 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181a0a4c-0297-438f-a928-1127f0b93627-config-data\") pod \"nova-api-0\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " pod="openstack/nova-api-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.005923 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/181a0a4c-0297-438f-a928-1127f0b93627-logs\") pod \"nova-api-0\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " pod="openstack/nova-api-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.107698 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/181a0a4c-0297-438f-a928-1127f0b93627-config-data\") pod \"nova-api-0\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " pod="openstack/nova-api-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.108094 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/181a0a4c-0297-438f-a928-1127f0b93627-logs\") pod \"nova-api-0\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " pod="openstack/nova-api-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.108467 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181a0a4c-0297-438f-a928-1127f0b93627-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " pod="openstack/nova-api-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.108656 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-257ck\" (UniqueName: \"kubernetes.io/projected/181a0a4c-0297-438f-a928-1127f0b93627-kube-api-access-257ck\") pod \"nova-api-0\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " pod="openstack/nova-api-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.108711 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/181a0a4c-0297-438f-a928-1127f0b93627-logs\") pod \"nova-api-0\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " pod="openstack/nova-api-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.115283 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181a0a4c-0297-438f-a928-1127f0b93627-config-data\") pod \"nova-api-0\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " pod="openstack/nova-api-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.115630 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181a0a4c-0297-438f-a928-1127f0b93627-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " pod="openstack/nova-api-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.130644 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-257ck\" (UniqueName: \"kubernetes.io/projected/181a0a4c-0297-438f-a928-1127f0b93627-kube-api-access-257ck\") pod \"nova-api-0\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " pod="openstack/nova-api-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.161261 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.190296 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.396320 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.397358 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.681847 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.791878 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c6324dec-5a51-4c52-be79-9ff505e69807","Type":"ContainerStarted","Data":"6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394"} Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.794762 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"181a0a4c-0297-438f-a928-1127f0b93627","Type":"ContainerStarted","Data":"f7ee8ab47a14dfc8e3a80126d0ceedae33d12913c69f0a9e9a55d09f55691062"} Nov 24 12:47:27 crc kubenswrapper[4756]: I1124 12:47:27.819796 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.819768881 podStartE2EDuration="2.819768881s" podCreationTimestamp="2025-11-24 12:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:47:27.814299763 +0000 UTC m=+1180.171814125" watchObservedRunningTime="2025-11-24 12:47:27.819768881 +0000 UTC m=+1180.177283023" Nov 24 12:47:28 crc kubenswrapper[4756]: I1124 12:47:28.488570 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3f9be2-b906-40ba-85bb-dfb3151eb864" path="/var/lib/kubelet/pods/dc3f9be2-b906-40ba-85bb-dfb3151eb864/volumes" Nov 24 12:47:28 crc kubenswrapper[4756]: I1124 12:47:28.817113 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"181a0a4c-0297-438f-a928-1127f0b93627","Type":"ContainerStarted","Data":"610cebaf158bf513b671c8ac2a592d6f4b3eeccaa14c4d12296ab533830c8122"} Nov 24 12:47:28 crc kubenswrapper[4756]: I1124 12:47:28.818464 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"181a0a4c-0297-438f-a928-1127f0b93627","Type":"ContainerStarted","Data":"8a9f79b482b7ccc751b59be36d167681e30a6a8512ca9e7fdc23e53b14f6172a"} Nov 24 12:47:28 crc kubenswrapper[4756]: I1124 12:47:28.845753 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.845731407 podStartE2EDuration="2.845731407s" podCreationTimestamp="2025-11-24 12:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:47:28.838634865 +0000 
UTC m=+1181.196149017" watchObservedRunningTime="2025-11-24 12:47:28.845731407 +0000 UTC m=+1181.203245559" Nov 24 12:47:30 crc kubenswrapper[4756]: I1124 12:47:30.624085 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 12:47:31 crc kubenswrapper[4756]: I1124 12:47:31.185461 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 12:47:32 crc kubenswrapper[4756]: I1124 12:47:32.396218 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 12:47:32 crc kubenswrapper[4756]: I1124 12:47:32.396609 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 12:47:33 crc kubenswrapper[4756]: I1124 12:47:33.409395 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:47:33 crc kubenswrapper[4756]: I1124 12:47:33.409440 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:47:33 crc kubenswrapper[4756]: I1124 12:47:33.479114 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:47:33 crc kubenswrapper[4756]: I1124 12:47:33.479188 4756 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:47:34 crc kubenswrapper[4756]: I1124 12:47:34.787234 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:47:34 crc kubenswrapper[4756]: I1124 12:47:34.787812 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b9447314-6235-4140-879b-cc20306cc7e1" containerName="kube-state-metrics" containerID="cri-o://e0243d92bbfa07a5427248d51533967e7f32270888b8ad7957d0e5367e632e5c" gracePeriod=30 Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.318605 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.412908 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9tdp\" (UniqueName: \"kubernetes.io/projected/b9447314-6235-4140-879b-cc20306cc7e1-kube-api-access-v9tdp\") pod \"b9447314-6235-4140-879b-cc20306cc7e1\" (UID: \"b9447314-6235-4140-879b-cc20306cc7e1\") " Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.421196 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9447314-6235-4140-879b-cc20306cc7e1-kube-api-access-v9tdp" (OuterVolumeSpecName: "kube-api-access-v9tdp") pod "b9447314-6235-4140-879b-cc20306cc7e1" (UID: "b9447314-6235-4140-879b-cc20306cc7e1"). InnerVolumeSpecName "kube-api-access-v9tdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.516109 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9tdp\" (UniqueName: \"kubernetes.io/projected/b9447314-6235-4140-879b-cc20306cc7e1-kube-api-access-v9tdp\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.892777 4756 generic.go:334] "Generic (PLEG): container finished" podID="b9447314-6235-4140-879b-cc20306cc7e1" containerID="e0243d92bbfa07a5427248d51533967e7f32270888b8ad7957d0e5367e632e5c" exitCode=2 Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.892823 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b9447314-6235-4140-879b-cc20306cc7e1","Type":"ContainerDied","Data":"e0243d92bbfa07a5427248d51533967e7f32270888b8ad7957d0e5367e632e5c"} Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.892852 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b9447314-6235-4140-879b-cc20306cc7e1","Type":"ContainerDied","Data":"1543bd20f0d1b4e0a124b60983fdbb5a9efeceb74fc53521d746d2ca6b05599a"} Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.892869 4756 scope.go:117] "RemoveContainer" containerID="e0243d92bbfa07a5427248d51533967e7f32270888b8ad7957d0e5367e632e5c" Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.892986 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.935248 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.937971 4756 scope.go:117] "RemoveContainer" containerID="e0243d92bbfa07a5427248d51533967e7f32270888b8ad7957d0e5367e632e5c" Nov 24 12:47:35 crc kubenswrapper[4756]: E1124 12:47:35.938655 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0243d92bbfa07a5427248d51533967e7f32270888b8ad7957d0e5367e632e5c\": container with ID starting with e0243d92bbfa07a5427248d51533967e7f32270888b8ad7957d0e5367e632e5c not found: ID does not exist" containerID="e0243d92bbfa07a5427248d51533967e7f32270888b8ad7957d0e5367e632e5c" Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.938692 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0243d92bbfa07a5427248d51533967e7f32270888b8ad7957d0e5367e632e5c"} err="failed to get container status \"e0243d92bbfa07a5427248d51533967e7f32270888b8ad7957d0e5367e632e5c\": rpc error: code = NotFound desc = could not find container \"e0243d92bbfa07a5427248d51533967e7f32270888b8ad7957d0e5367e632e5c\": container with ID starting with e0243d92bbfa07a5427248d51533967e7f32270888b8ad7957d0e5367e632e5c not found: ID does not exist" Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.944610 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.974348 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:47:35 crc kubenswrapper[4756]: E1124 12:47:35.975830 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9447314-6235-4140-879b-cc20306cc7e1" containerName="kube-state-metrics" Nov 24 12:47:35 crc 
kubenswrapper[4756]: I1124 12:47:35.975858 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9447314-6235-4140-879b-cc20306cc7e1" containerName="kube-state-metrics" Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.976500 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9447314-6235-4140-879b-cc20306cc7e1" containerName="kube-state-metrics" Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.978711 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.983187 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 24 12:47:35 crc kubenswrapper[4756]: I1124 12:47:35.983943 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.009839 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.134393 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npjs5\" (UniqueName: \"kubernetes.io/projected/d6fdbd64-1ed5-48d3-a245-a13416afe4d9-kube-api-access-npjs5\") pod \"kube-state-metrics-0\" (UID: \"d6fdbd64-1ed5-48d3-a245-a13416afe4d9\") " pod="openstack/kube-state-metrics-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.134480 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d6fdbd64-1ed5-48d3-a245-a13416afe4d9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d6fdbd64-1ed5-48d3-a245-a13416afe4d9\") " pod="openstack/kube-state-metrics-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.134515 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6fdbd64-1ed5-48d3-a245-a13416afe4d9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d6fdbd64-1ed5-48d3-a245-a13416afe4d9\") " pod="openstack/kube-state-metrics-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.134621 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6fdbd64-1ed5-48d3-a245-a13416afe4d9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d6fdbd64-1ed5-48d3-a245-a13416afe4d9\") " pod="openstack/kube-state-metrics-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.189032 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.224579 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.237204 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6fdbd64-1ed5-48d3-a245-a13416afe4d9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d6fdbd64-1ed5-48d3-a245-a13416afe4d9\") " pod="openstack/kube-state-metrics-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.237347 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npjs5\" (UniqueName: \"kubernetes.io/projected/d6fdbd64-1ed5-48d3-a245-a13416afe4d9-kube-api-access-npjs5\") pod \"kube-state-metrics-0\" (UID: \"d6fdbd64-1ed5-48d3-a245-a13416afe4d9\") " pod="openstack/kube-state-metrics-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.237418 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d6fdbd64-1ed5-48d3-a245-a13416afe4d9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d6fdbd64-1ed5-48d3-a245-a13416afe4d9\") " pod="openstack/kube-state-metrics-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.237458 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6fdbd64-1ed5-48d3-a245-a13416afe4d9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d6fdbd64-1ed5-48d3-a245-a13416afe4d9\") " pod="openstack/kube-state-metrics-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.246127 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d6fdbd64-1ed5-48d3-a245-a13416afe4d9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d6fdbd64-1ed5-48d3-a245-a13416afe4d9\") " pod="openstack/kube-state-metrics-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.251025 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6fdbd64-1ed5-48d3-a245-a13416afe4d9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d6fdbd64-1ed5-48d3-a245-a13416afe4d9\") " pod="openstack/kube-state-metrics-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.252754 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6fdbd64-1ed5-48d3-a245-a13416afe4d9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d6fdbd64-1ed5-48d3-a245-a13416afe4d9\") " pod="openstack/kube-state-metrics-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.263937 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npjs5\" (UniqueName: 
\"kubernetes.io/projected/d6fdbd64-1ed5-48d3-a245-a13416afe4d9-kube-api-access-npjs5\") pod \"kube-state-metrics-0\" (UID: \"d6fdbd64-1ed5-48d3-a245-a13416afe4d9\") " pod="openstack/kube-state-metrics-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.318581 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.494899 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9447314-6235-4140-879b-cc20306cc7e1" path="/var/lib/kubelet/pods/b9447314-6235-4140-879b-cc20306cc7e1/volumes" Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.839288 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.840418 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="sg-core" containerID="cri-o://285edb0c0f3164b9bd173f71088a8ef8ffbfbc1cfd45e563f5a1b3d38070b943" gracePeriod=30 Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.840777 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="proxy-httpd" containerID="cri-o://5b3a30670a50b95221ed78b73410991adde01d1a87f6be34599c3653d1eaea0a" gracePeriod=30 Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.840872 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="ceilometer-notification-agent" containerID="cri-o://60842c9eb7b7a459f459ec680bf5a1437e46737299eb1bd8eca40fee6ad58a34" gracePeriod=30 Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.841182 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="ceilometer-central-agent" containerID="cri-o://f8c8e856370852cbe3bcdeddbad325dd97eae2470c5e700766ffdba0d5884910" gracePeriod=30 Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.853068 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 12:47:36 crc kubenswrapper[4756]: W1124 12:47:36.861702 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6fdbd64_1ed5_48d3_a245_a13416afe4d9.slice/crio-54bd1926e6f375a226c14eb6ae18802fcd83e8eca180d42606cd9eaddcd38a29 WatchSource:0}: Error finding container 54bd1926e6f375a226c14eb6ae18802fcd83e8eca180d42606cd9eaddcd38a29: Status 404 returned error can't find the container with id 54bd1926e6f375a226c14eb6ae18802fcd83e8eca180d42606cd9eaddcd38a29 Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.905217 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d6fdbd64-1ed5-48d3-a245-a13416afe4d9","Type":"ContainerStarted","Data":"54bd1926e6f375a226c14eb6ae18802fcd83e8eca180d42606cd9eaddcd38a29"} Nov 24 12:47:36 crc kubenswrapper[4756]: I1124 12:47:36.945879 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 12:47:37 crc kubenswrapper[4756]: I1124 12:47:37.190950 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:47:37 crc kubenswrapper[4756]: I1124 12:47:37.191395 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:47:37 crc kubenswrapper[4756]: I1124 12:47:37.921227 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"d6fdbd64-1ed5-48d3-a245-a13416afe4d9","Type":"ContainerStarted","Data":"189fcb6e7b66adb1b44ff4ec04df8e95c56d6ffb33e2c2cc2f4991b93b84bb93"} Nov 24 12:47:37 crc kubenswrapper[4756]: I1124 12:47:37.921719 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 12:47:37 crc kubenswrapper[4756]: I1124 12:47:37.925865 4756 generic.go:334] "Generic (PLEG): container finished" podID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerID="5b3a30670a50b95221ed78b73410991adde01d1a87f6be34599c3653d1eaea0a" exitCode=0 Nov 24 12:47:37 crc kubenswrapper[4756]: I1124 12:47:37.925905 4756 generic.go:334] "Generic (PLEG): container finished" podID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerID="285edb0c0f3164b9bd173f71088a8ef8ffbfbc1cfd45e563f5a1b3d38070b943" exitCode=2 Nov 24 12:47:37 crc kubenswrapper[4756]: I1124 12:47:37.925920 4756 generic.go:334] "Generic (PLEG): container finished" podID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerID="f8c8e856370852cbe3bcdeddbad325dd97eae2470c5e700766ffdba0d5884910" exitCode=0 Nov 24 12:47:37 crc kubenswrapper[4756]: I1124 12:47:37.925992 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9590efb-66f1-496f-884f-9685c2a3af1b","Type":"ContainerDied","Data":"5b3a30670a50b95221ed78b73410991adde01d1a87f6be34599c3653d1eaea0a"} Nov 24 12:47:37 crc kubenswrapper[4756]: I1124 12:47:37.926060 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9590efb-66f1-496f-884f-9685c2a3af1b","Type":"ContainerDied","Data":"285edb0c0f3164b9bd173f71088a8ef8ffbfbc1cfd45e563f5a1b3d38070b943"} Nov 24 12:47:37 crc kubenswrapper[4756]: I1124 12:47:37.926074 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9590efb-66f1-496f-884f-9685c2a3af1b","Type":"ContainerDied","Data":"f8c8e856370852cbe3bcdeddbad325dd97eae2470c5e700766ffdba0d5884910"} Nov 24 12:47:37 crc 
kubenswrapper[4756]: I1124 12:47:37.948987 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.322899234 podStartE2EDuration="2.948961293s" podCreationTimestamp="2025-11-24 12:47:35 +0000 UTC" firstStartedPulling="2025-11-24 12:47:36.864998999 +0000 UTC m=+1189.222513141" lastFinishedPulling="2025-11-24 12:47:37.491061068 +0000 UTC m=+1189.848575200" observedRunningTime="2025-11-24 12:47:37.939712563 +0000 UTC m=+1190.297226705" watchObservedRunningTime="2025-11-24 12:47:37.948961293 +0000 UTC m=+1190.306475435" Nov 24 12:47:38 crc kubenswrapper[4756]: I1124 12:47:38.272486 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="181a0a4c-0297-438f-a928-1127f0b93627" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:47:38 crc kubenswrapper[4756]: I1124 12:47:38.272527 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="181a0a4c-0297-438f-a928-1127f0b93627" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.890179 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.963644 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-scripts\") pod \"f9590efb-66f1-496f-884f-9685c2a3af1b\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.963769 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-combined-ca-bundle\") pod \"f9590efb-66f1-496f-884f-9685c2a3af1b\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.963798 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9snf4\" (UniqueName: \"kubernetes.io/projected/f9590efb-66f1-496f-884f-9685c2a3af1b-kube-api-access-9snf4\") pod \"f9590efb-66f1-496f-884f-9685c2a3af1b\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.963876 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-config-data\") pod \"f9590efb-66f1-496f-884f-9685c2a3af1b\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.963906 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9590efb-66f1-496f-884f-9685c2a3af1b-log-httpd\") pod \"f9590efb-66f1-496f-884f-9685c2a3af1b\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.964035 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f9590efb-66f1-496f-884f-9685c2a3af1b-run-httpd\") pod \"f9590efb-66f1-496f-884f-9685c2a3af1b\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.964188 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-sg-core-conf-yaml\") pod \"f9590efb-66f1-496f-884f-9685c2a3af1b\" (UID: \"f9590efb-66f1-496f-884f-9685c2a3af1b\") " Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.964527 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9590efb-66f1-496f-884f-9685c2a3af1b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f9590efb-66f1-496f-884f-9685c2a3af1b" (UID: "f9590efb-66f1-496f-884f-9685c2a3af1b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.964750 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9590efb-66f1-496f-884f-9685c2a3af1b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f9590efb-66f1-496f-884f-9685c2a3af1b" (UID: "f9590efb-66f1-496f-884f-9685c2a3af1b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.965013 4756 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9590efb-66f1-496f-884f-9685c2a3af1b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.968085 4756 generic.go:334] "Generic (PLEG): container finished" podID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerID="60842c9eb7b7a459f459ec680bf5a1437e46737299eb1bd8eca40fee6ad58a34" exitCode=0 Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.968193 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9590efb-66f1-496f-884f-9685c2a3af1b","Type":"ContainerDied","Data":"60842c9eb7b7a459f459ec680bf5a1437e46737299eb1bd8eca40fee6ad58a34"} Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.968573 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9590efb-66f1-496f-884f-9685c2a3af1b","Type":"ContainerDied","Data":"6c4b511064e8f3ccb3518e83e20d35801bc9fb2aadb5d6e1264ba98d8e05d302"} Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.968606 4756 scope.go:117] "RemoveContainer" containerID="5b3a30670a50b95221ed78b73410991adde01d1a87f6be34599c3653d1eaea0a" Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.968240 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.979249 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-scripts" (OuterVolumeSpecName: "scripts") pod "f9590efb-66f1-496f-884f-9685c2a3af1b" (UID: "f9590efb-66f1-496f-884f-9685c2a3af1b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.979402 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9590efb-66f1-496f-884f-9685c2a3af1b-kube-api-access-9snf4" (OuterVolumeSpecName: "kube-api-access-9snf4") pod "f9590efb-66f1-496f-884f-9685c2a3af1b" (UID: "f9590efb-66f1-496f-884f-9685c2a3af1b"). InnerVolumeSpecName "kube-api-access-9snf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:41 crc kubenswrapper[4756]: I1124 12:47:41.997952 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f9590efb-66f1-496f-884f-9685c2a3af1b" (UID: "f9590efb-66f1-496f-884f-9685c2a3af1b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.055684 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9590efb-66f1-496f-884f-9685c2a3af1b" (UID: "f9590efb-66f1-496f-884f-9685c2a3af1b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.068089 4756 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.068137 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.068174 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.068195 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9snf4\" (UniqueName: \"kubernetes.io/projected/f9590efb-66f1-496f-884f-9685c2a3af1b-kube-api-access-9snf4\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.068215 4756 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9590efb-66f1-496f-884f-9685c2a3af1b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.071262 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-config-data" (OuterVolumeSpecName: "config-data") pod "f9590efb-66f1-496f-884f-9685c2a3af1b" (UID: "f9590efb-66f1-496f-884f-9685c2a3af1b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.127817 4756 scope.go:117] "RemoveContainer" containerID="285edb0c0f3164b9bd173f71088a8ef8ffbfbc1cfd45e563f5a1b3d38070b943" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.145068 4756 scope.go:117] "RemoveContainer" containerID="60842c9eb7b7a459f459ec680bf5a1437e46737299eb1bd8eca40fee6ad58a34" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.170825 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9590efb-66f1-496f-884f-9685c2a3af1b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.177766 4756 scope.go:117] "RemoveContainer" containerID="f8c8e856370852cbe3bcdeddbad325dd97eae2470c5e700766ffdba0d5884910" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.204257 4756 scope.go:117] "RemoveContainer" containerID="5b3a30670a50b95221ed78b73410991adde01d1a87f6be34599c3653d1eaea0a" Nov 24 12:47:42 crc kubenswrapper[4756]: E1124 12:47:42.205407 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3a30670a50b95221ed78b73410991adde01d1a87f6be34599c3653d1eaea0a\": container with ID starting with 5b3a30670a50b95221ed78b73410991adde01d1a87f6be34599c3653d1eaea0a not found: ID does not exist" containerID="5b3a30670a50b95221ed78b73410991adde01d1a87f6be34599c3653d1eaea0a" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.205469 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3a30670a50b95221ed78b73410991adde01d1a87f6be34599c3653d1eaea0a"} err="failed to get container status \"5b3a30670a50b95221ed78b73410991adde01d1a87f6be34599c3653d1eaea0a\": rpc error: code = NotFound desc = could not find container \"5b3a30670a50b95221ed78b73410991adde01d1a87f6be34599c3653d1eaea0a\": container with ID starting with 
5b3a30670a50b95221ed78b73410991adde01d1a87f6be34599c3653d1eaea0a not found: ID does not exist" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.205510 4756 scope.go:117] "RemoveContainer" containerID="285edb0c0f3164b9bd173f71088a8ef8ffbfbc1cfd45e563f5a1b3d38070b943" Nov 24 12:47:42 crc kubenswrapper[4756]: E1124 12:47:42.205942 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"285edb0c0f3164b9bd173f71088a8ef8ffbfbc1cfd45e563f5a1b3d38070b943\": container with ID starting with 285edb0c0f3164b9bd173f71088a8ef8ffbfbc1cfd45e563f5a1b3d38070b943 not found: ID does not exist" containerID="285edb0c0f3164b9bd173f71088a8ef8ffbfbc1cfd45e563f5a1b3d38070b943" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.205981 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"285edb0c0f3164b9bd173f71088a8ef8ffbfbc1cfd45e563f5a1b3d38070b943"} err="failed to get container status \"285edb0c0f3164b9bd173f71088a8ef8ffbfbc1cfd45e563f5a1b3d38070b943\": rpc error: code = NotFound desc = could not find container \"285edb0c0f3164b9bd173f71088a8ef8ffbfbc1cfd45e563f5a1b3d38070b943\": container with ID starting with 285edb0c0f3164b9bd173f71088a8ef8ffbfbc1cfd45e563f5a1b3d38070b943 not found: ID does not exist" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.206001 4756 scope.go:117] "RemoveContainer" containerID="60842c9eb7b7a459f459ec680bf5a1437e46737299eb1bd8eca40fee6ad58a34" Nov 24 12:47:42 crc kubenswrapper[4756]: E1124 12:47:42.206261 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60842c9eb7b7a459f459ec680bf5a1437e46737299eb1bd8eca40fee6ad58a34\": container with ID starting with 60842c9eb7b7a459f459ec680bf5a1437e46737299eb1bd8eca40fee6ad58a34 not found: ID does not exist" containerID="60842c9eb7b7a459f459ec680bf5a1437e46737299eb1bd8eca40fee6ad58a34" Nov 24 12:47:42 crc 
kubenswrapper[4756]: I1124 12:47:42.206290 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60842c9eb7b7a459f459ec680bf5a1437e46737299eb1bd8eca40fee6ad58a34"} err="failed to get container status \"60842c9eb7b7a459f459ec680bf5a1437e46737299eb1bd8eca40fee6ad58a34\": rpc error: code = NotFound desc = could not find container \"60842c9eb7b7a459f459ec680bf5a1437e46737299eb1bd8eca40fee6ad58a34\": container with ID starting with 60842c9eb7b7a459f459ec680bf5a1437e46737299eb1bd8eca40fee6ad58a34 not found: ID does not exist" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.206308 4756 scope.go:117] "RemoveContainer" containerID="f8c8e856370852cbe3bcdeddbad325dd97eae2470c5e700766ffdba0d5884910" Nov 24 12:47:42 crc kubenswrapper[4756]: E1124 12:47:42.206581 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8c8e856370852cbe3bcdeddbad325dd97eae2470c5e700766ffdba0d5884910\": container with ID starting with f8c8e856370852cbe3bcdeddbad325dd97eae2470c5e700766ffdba0d5884910 not found: ID does not exist" containerID="f8c8e856370852cbe3bcdeddbad325dd97eae2470c5e700766ffdba0d5884910" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.206610 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8c8e856370852cbe3bcdeddbad325dd97eae2470c5e700766ffdba0d5884910"} err="failed to get container status \"f8c8e856370852cbe3bcdeddbad325dd97eae2470c5e700766ffdba0d5884910\": rpc error: code = NotFound desc = could not find container \"f8c8e856370852cbe3bcdeddbad325dd97eae2470c5e700766ffdba0d5884910\": container with ID starting with f8c8e856370852cbe3bcdeddbad325dd97eae2470c5e700766ffdba0d5884910 not found: ID does not exist" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.311045 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:42 crc kubenswrapper[4756]: 
I1124 12:47:42.323957 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.332920 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:42 crc kubenswrapper[4756]: E1124 12:47:42.333627 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="ceilometer-central-agent" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.333647 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="ceilometer-central-agent" Nov 24 12:47:42 crc kubenswrapper[4756]: E1124 12:47:42.333673 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="sg-core" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.333683 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="sg-core" Nov 24 12:47:42 crc kubenswrapper[4756]: E1124 12:47:42.333710 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="ceilometer-notification-agent" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.333716 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="ceilometer-notification-agent" Nov 24 12:47:42 crc kubenswrapper[4756]: E1124 12:47:42.333726 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="proxy-httpd" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.333733 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="proxy-httpd" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.333912 4756 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="proxy-httpd" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.333924 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="sg-core" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.333948 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="ceilometer-notification-agent" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.333962 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" containerName="ceilometer-central-agent" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.335748 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.338143 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.342204 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.342543 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.346776 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.401265 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.404415 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.409567 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.484501 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.484654 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78b4l\" (UniqueName: \"kubernetes.io/projected/2c6f4312-f43f-428f-9b72-476317c008c0-kube-api-access-78b4l\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.485070 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.485209 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6f4312-f43f-428f-9b72-476317c008c0-run-httpd\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.485478 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.485526 
4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-scripts\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.485549 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6f4312-f43f-428f-9b72-476317c008c0-log-httpd\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.485615 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-config-data\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.492754 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9590efb-66f1-496f-884f-9685c2a3af1b" path="/var/lib/kubelet/pods/f9590efb-66f1-496f-884f-9685c2a3af1b/volumes" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.587924 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6f4312-f43f-428f-9b72-476317c008c0-log-httpd\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.588305 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-config-data\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc 
kubenswrapper[4756]: I1124 12:47:42.588415 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.588567 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78b4l\" (UniqueName: \"kubernetes.io/projected/2c6f4312-f43f-428f-9b72-476317c008c0-kube-api-access-78b4l\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.588679 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.588778 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6f4312-f43f-428f-9b72-476317c008c0-run-httpd\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.588849 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.588931 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-scripts\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.590133 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6f4312-f43f-428f-9b72-476317c008c0-log-httpd\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.590221 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6f4312-f43f-428f-9b72-476317c008c0-run-httpd\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.592854 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-scripts\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.592906 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.593839 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.594575 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-config-data\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.596306 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.608362 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78b4l\" (UniqueName: \"kubernetes.io/projected/2c6f4312-f43f-428f-9b72-476317c008c0-kube-api-access-78b4l\") pod \"ceilometer-0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.662932 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:47:42 crc kubenswrapper[4756]: I1124 12:47:42.985711 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 12:47:43 crc kubenswrapper[4756]: W1124 12:47:43.127381 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c6f4312_f43f_428f_9b72_476317c008c0.slice/crio-78b9e26c276e6be50807f4b658e4769940a398b254b23031a27d42ee1e308f7a WatchSource:0}: Error finding container 78b9e26c276e6be50807f4b658e4769940a398b254b23031a27d42ee1e308f7a: Status 404 returned error can't find the container with id 78b9e26c276e6be50807f4b658e4769940a398b254b23031a27d42ee1e308f7a Nov 24 12:47:43 crc kubenswrapper[4756]: I1124 12:47:43.130125 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:43 crc kubenswrapper[4756]: I1124 12:47:43.990201 4756 generic.go:334] "Generic (PLEG): container finished" podID="f847c6f9-2c3a-4846-bc94-09a7685f3387" containerID="821c6ed53a1b520b2a12c1a84cfb2acf1f91b64b82600b6264e0af2a98b4fbbd" exitCode=137 Nov 24 12:47:43 crc kubenswrapper[4756]: I1124 12:47:43.990294 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f847c6f9-2c3a-4846-bc94-09a7685f3387","Type":"ContainerDied","Data":"821c6ed53a1b520b2a12c1a84cfb2acf1f91b64b82600b6264e0af2a98b4fbbd"} Nov 24 12:47:43 crc kubenswrapper[4756]: I1124 12:47:43.991877 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6f4312-f43f-428f-9b72-476317c008c0","Type":"ContainerStarted","Data":"78b9e26c276e6be50807f4b658e4769940a398b254b23031a27d42ee1e308f7a"} Nov 24 12:47:44 crc kubenswrapper[4756]: I1124 12:47:44.351234 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:44 crc kubenswrapper[4756]: I1124 12:47:44.385596 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgb78\" (UniqueName: \"kubernetes.io/projected/f847c6f9-2c3a-4846-bc94-09a7685f3387-kube-api-access-wgb78\") pod \"f847c6f9-2c3a-4846-bc94-09a7685f3387\" (UID: \"f847c6f9-2c3a-4846-bc94-09a7685f3387\") " Nov 24 12:47:44 crc kubenswrapper[4756]: I1124 12:47:44.385724 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f847c6f9-2c3a-4846-bc94-09a7685f3387-combined-ca-bundle\") pod \"f847c6f9-2c3a-4846-bc94-09a7685f3387\" (UID: \"f847c6f9-2c3a-4846-bc94-09a7685f3387\") " Nov 24 12:47:44 crc kubenswrapper[4756]: I1124 12:47:44.385779 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f847c6f9-2c3a-4846-bc94-09a7685f3387-config-data\") pod \"f847c6f9-2c3a-4846-bc94-09a7685f3387\" (UID: \"f847c6f9-2c3a-4846-bc94-09a7685f3387\") " Nov 24 12:47:44 crc kubenswrapper[4756]: I1124 12:47:44.392192 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f847c6f9-2c3a-4846-bc94-09a7685f3387-kube-api-access-wgb78" (OuterVolumeSpecName: "kube-api-access-wgb78") pod "f847c6f9-2c3a-4846-bc94-09a7685f3387" (UID: "f847c6f9-2c3a-4846-bc94-09a7685f3387"). InnerVolumeSpecName "kube-api-access-wgb78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:44 crc kubenswrapper[4756]: I1124 12:47:44.392594 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgb78\" (UniqueName: \"kubernetes.io/projected/f847c6f9-2c3a-4846-bc94-09a7685f3387-kube-api-access-wgb78\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:44 crc kubenswrapper[4756]: I1124 12:47:44.417683 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f847c6f9-2c3a-4846-bc94-09a7685f3387-config-data" (OuterVolumeSpecName: "config-data") pod "f847c6f9-2c3a-4846-bc94-09a7685f3387" (UID: "f847c6f9-2c3a-4846-bc94-09a7685f3387"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:44 crc kubenswrapper[4756]: I1124 12:47:44.423612 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f847c6f9-2c3a-4846-bc94-09a7685f3387-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f847c6f9-2c3a-4846-bc94-09a7685f3387" (UID: "f847c6f9-2c3a-4846-bc94-09a7685f3387"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:44 crc kubenswrapper[4756]: I1124 12:47:44.496190 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f847c6f9-2c3a-4846-bc94-09a7685f3387-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:44 crc kubenswrapper[4756]: I1124 12:47:44.496230 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f847c6f9-2c3a-4846-bc94-09a7685f3387-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.004048 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6f4312-f43f-428f-9b72-476317c008c0","Type":"ContainerStarted","Data":"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334"} Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.004405 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6f4312-f43f-428f-9b72-476317c008c0","Type":"ContainerStarted","Data":"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311"} Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.005933 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f847c6f9-2c3a-4846-bc94-09a7685f3387","Type":"ContainerDied","Data":"672bcf61ba425a7a5579b7eacba3467dc08278568fd4fef4e07d7fea55104b0a"} Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.005962 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.005978 4756 scope.go:117] "RemoveContainer" containerID="821c6ed53a1b520b2a12c1a84cfb2acf1f91b64b82600b6264e0af2a98b4fbbd" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.031089 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.044993 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.056056 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:47:45 crc kubenswrapper[4756]: E1124 12:47:45.056481 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f847c6f9-2c3a-4846-bc94-09a7685f3387" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.056493 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f847c6f9-2c3a-4846-bc94-09a7685f3387" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.056689 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f847c6f9-2c3a-4846-bc94-09a7685f3387" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.057395 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.060433 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.062916 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.064911 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.093611 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.105116 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54bdc4b7-e42a-49b9-b81e-d817f3c08555-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.105222 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/54bdc4b7-e42a-49b9-b81e-d817f3c08555-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.105256 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwqpl\" (UniqueName: \"kubernetes.io/projected/54bdc4b7-e42a-49b9-b81e-d817f3c08555-kube-api-access-fwqpl\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 
crc kubenswrapper[4756]: I1124 12:47:45.105428 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/54bdc4b7-e42a-49b9-b81e-d817f3c08555-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.105455 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54bdc4b7-e42a-49b9-b81e-d817f3c08555-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.207655 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/54bdc4b7-e42a-49b9-b81e-d817f3c08555-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.208026 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwqpl\" (UniqueName: \"kubernetes.io/projected/54bdc4b7-e42a-49b9-b81e-d817f3c08555-kube-api-access-fwqpl\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.208240 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/54bdc4b7-e42a-49b9-b81e-d817f3c08555-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc 
kubenswrapper[4756]: I1124 12:47:45.208280 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54bdc4b7-e42a-49b9-b81e-d817f3c08555-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.208318 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54bdc4b7-e42a-49b9-b81e-d817f3c08555-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.212777 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/54bdc4b7-e42a-49b9-b81e-d817f3c08555-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.213045 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/54bdc4b7-e42a-49b9-b81e-d817f3c08555-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.216021 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54bdc4b7-e42a-49b9-b81e-d817f3c08555-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.228096 4756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54bdc4b7-e42a-49b9-b81e-d817f3c08555-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.243777 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwqpl\" (UniqueName: \"kubernetes.io/projected/54bdc4b7-e42a-49b9-b81e-d817f3c08555-kube-api-access-fwqpl\") pod \"nova-cell1-novncproxy-0\" (UID: \"54bdc4b7-e42a-49b9-b81e-d817f3c08555\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.375258 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:45 crc kubenswrapper[4756]: I1124 12:47:45.870302 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 12:47:45 crc kubenswrapper[4756]: W1124 12:47:45.876792 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54bdc4b7_e42a_49b9_b81e_d817f3c08555.slice/crio-ca0c0d19bd2b7a8b565d6eed9e55f830a019da404bb0444faa42ce27d2ecc112 WatchSource:0}: Error finding container ca0c0d19bd2b7a8b565d6eed9e55f830a019da404bb0444faa42ce27d2ecc112: Status 404 returned error can't find the container with id ca0c0d19bd2b7a8b565d6eed9e55f830a019da404bb0444faa42ce27d2ecc112 Nov 24 12:47:46 crc kubenswrapper[4756]: I1124 12:47:46.017025 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"54bdc4b7-e42a-49b9-b81e-d817f3c08555","Type":"ContainerStarted","Data":"ca0c0d19bd2b7a8b565d6eed9e55f830a019da404bb0444faa42ce27d2ecc112"} Nov 24 12:47:46 crc kubenswrapper[4756]: I1124 12:47:46.021513 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2c6f4312-f43f-428f-9b72-476317c008c0","Type":"ContainerStarted","Data":"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249"} Nov 24 12:47:46 crc kubenswrapper[4756]: I1124 12:47:46.334687 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 12:47:46 crc kubenswrapper[4756]: I1124 12:47:46.493755 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f847c6f9-2c3a-4846-bc94-09a7685f3387" path="/var/lib/kubelet/pods/f847c6f9-2c3a-4846-bc94-09a7685f3387/volumes" Nov 24 12:47:47 crc kubenswrapper[4756]: I1124 12:47:47.267118 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 12:47:47 crc kubenswrapper[4756]: I1124 12:47:47.270055 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 12:47:47 crc kubenswrapper[4756]: I1124 12:47:47.271498 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 12:47:47 crc kubenswrapper[4756]: I1124 12:47:47.279863 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 12:47:47 crc kubenswrapper[4756]: I1124 12:47:47.285310 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"54bdc4b7-e42a-49b9-b81e-d817f3c08555","Type":"ContainerStarted","Data":"baf9d9a3e1ef711b37d0b68c99cfb14c7c1503a04d51c0276a1e1ed172460ea6"} Nov 24 12:47:47 crc kubenswrapper[4756]: I1124 12:47:47.315137 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.315113654 podStartE2EDuration="2.315113654s" podCreationTimestamp="2025-11-24 12:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:47:47.305036251 +0000 UTC 
m=+1199.662550393" watchObservedRunningTime="2025-11-24 12:47:47.315113654 +0000 UTC m=+1199.672627796" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.297113 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6f4312-f43f-428f-9b72-476317c008c0","Type":"ContainerStarted","Data":"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8"} Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.298688 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.302750 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.326505 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.15386649 podStartE2EDuration="6.326485286s" podCreationTimestamp="2025-11-24 12:47:42 +0000 UTC" firstStartedPulling="2025-11-24 12:47:43.130056942 +0000 UTC m=+1195.487571084" lastFinishedPulling="2025-11-24 12:47:47.302675738 +0000 UTC m=+1199.660189880" observedRunningTime="2025-11-24 12:47:48.321550833 +0000 UTC m=+1200.679064985" watchObservedRunningTime="2025-11-24 12:47:48.326485286 +0000 UTC m=+1200.683999428" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.661523 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-6npz8"] Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.668413 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.697010 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-6npz8"] Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.808090 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.808469 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.808489 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-config\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.808511 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27g2x\" (UniqueName: \"kubernetes.io/projected/05c4f949-e288-4f9b-91d4-1468f79ad265-kube-api-access-27g2x\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.808540 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.808631 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.910462 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.910534 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.910629 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.910656 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-config\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.910686 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27g2x\" (UniqueName: \"kubernetes.io/projected/05c4f949-e288-4f9b-91d4-1468f79ad265-kube-api-access-27g2x\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.910721 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.911690 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.911825 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.912404 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-dns-svc\") pod 
\"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.912660 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-config\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.913013 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:48 crc kubenswrapper[4756]: I1124 12:47:48.939623 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27g2x\" (UniqueName: \"kubernetes.io/projected/05c4f949-e288-4f9b-91d4-1468f79ad265-kube-api-access-27g2x\") pod \"dnsmasq-dns-cd5cbd7b9-6npz8\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:49 crc kubenswrapper[4756]: I1124 12:47:49.076768 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:49 crc kubenswrapper[4756]: I1124 12:47:49.329184 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:47:49 crc kubenswrapper[4756]: I1124 12:47:49.642289 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-6npz8"] Nov 24 12:47:50 crc kubenswrapper[4756]: I1124 12:47:50.346954 4756 generic.go:334] "Generic (PLEG): container finished" podID="05c4f949-e288-4f9b-91d4-1468f79ad265" containerID="0d26ab712bb61a3a4f5a9969eba1cf61ae537fe98ea5b2aa65087d9c8af1e9f3" exitCode=0 Nov 24 12:47:50 crc kubenswrapper[4756]: I1124 12:47:50.348382 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" event={"ID":"05c4f949-e288-4f9b-91d4-1468f79ad265","Type":"ContainerDied","Data":"0d26ab712bb61a3a4f5a9969eba1cf61ae537fe98ea5b2aa65087d9c8af1e9f3"} Nov 24 12:47:50 crc kubenswrapper[4756]: I1124 12:47:50.348463 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" event={"ID":"05c4f949-e288-4f9b-91d4-1468f79ad265","Type":"ContainerStarted","Data":"b4955c28c6fc30dc01d776c82f26aac766a412fed1679ffb8f4b38026114f509"} Nov 24 12:47:50 crc kubenswrapper[4756]: I1124 12:47:50.375397 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:50 crc kubenswrapper[4756]: I1124 12:47:50.906645 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:51 crc kubenswrapper[4756]: I1124 12:47:51.192815 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:47:51 crc kubenswrapper[4756]: I1124 12:47:51.362637 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" 
event={"ID":"05c4f949-e288-4f9b-91d4-1468f79ad265","Type":"ContainerStarted","Data":"00b7cd347a5bb3d5f47c515fe6e551b5232e897015fc1a0a039991d82c6cc01a"} Nov 24 12:47:51 crc kubenswrapper[4756]: I1124 12:47:51.363001 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="181a0a4c-0297-438f-a928-1127f0b93627" containerName="nova-api-log" containerID="cri-o://8a9f79b482b7ccc751b59be36d167681e30a6a8512ca9e7fdc23e53b14f6172a" gracePeriod=30 Nov 24 12:47:51 crc kubenswrapper[4756]: I1124 12:47:51.363059 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="181a0a4c-0297-438f-a928-1127f0b93627" containerName="nova-api-api" containerID="cri-o://610cebaf158bf513b671c8ac2a592d6f4b3eeccaa14c4d12296ab533830c8122" gracePeriod=30 Nov 24 12:47:51 crc kubenswrapper[4756]: I1124 12:47:51.363251 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="sg-core" containerID="cri-o://5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249" gracePeriod=30 Nov 24 12:47:51 crc kubenswrapper[4756]: I1124 12:47:51.363227 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="ceilometer-central-agent" containerID="cri-o://9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311" gracePeriod=30 Nov 24 12:47:51 crc kubenswrapper[4756]: I1124 12:47:51.363284 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="proxy-httpd" containerID="cri-o://830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8" gracePeriod=30 Nov 24 12:47:51 crc kubenswrapper[4756]: I1124 12:47:51.363342 4756 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="ceilometer-notification-agent" containerID="cri-o://a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334" gracePeriod=30 Nov 24 12:47:51 crc kubenswrapper[4756]: I1124 12:47:51.416889 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" podStartSLOduration=3.416859844 podStartE2EDuration="3.416859844s" podCreationTimestamp="2025-11-24 12:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:47:51.403978546 +0000 UTC m=+1203.761492688" watchObservedRunningTime="2025-11-24 12:47:51.416859844 +0000 UTC m=+1203.774373986" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.271348 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.319061 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-ceilometer-tls-certs\") pod \"2c6f4312-f43f-428f-9b72-476317c008c0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.319202 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-config-data\") pod \"2c6f4312-f43f-428f-9b72-476317c008c0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.319249 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-scripts\") pod \"2c6f4312-f43f-428f-9b72-476317c008c0\" (UID: 
\"2c6f4312-f43f-428f-9b72-476317c008c0\") " Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.319277 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6f4312-f43f-428f-9b72-476317c008c0-run-httpd\") pod \"2c6f4312-f43f-428f-9b72-476317c008c0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.319325 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6f4312-f43f-428f-9b72-476317c008c0-log-httpd\") pod \"2c6f4312-f43f-428f-9b72-476317c008c0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.319364 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-combined-ca-bundle\") pod \"2c6f4312-f43f-428f-9b72-476317c008c0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.319593 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78b4l\" (UniqueName: \"kubernetes.io/projected/2c6f4312-f43f-428f-9b72-476317c008c0-kube-api-access-78b4l\") pod \"2c6f4312-f43f-428f-9b72-476317c008c0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.319634 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-sg-core-conf-yaml\") pod \"2c6f4312-f43f-428f-9b72-476317c008c0\" (UID: \"2c6f4312-f43f-428f-9b72-476317c008c0\") " Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.319950 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2c6f4312-f43f-428f-9b72-476317c008c0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2c6f4312-f43f-428f-9b72-476317c008c0" (UID: "2c6f4312-f43f-428f-9b72-476317c008c0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.320286 4756 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6f4312-f43f-428f-9b72-476317c008c0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.320656 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c6f4312-f43f-428f-9b72-476317c008c0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2c6f4312-f43f-428f-9b72-476317c008c0" (UID: "2c6f4312-f43f-428f-9b72-476317c008c0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.337721 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-scripts" (OuterVolumeSpecName: "scripts") pod "2c6f4312-f43f-428f-9b72-476317c008c0" (UID: "2c6f4312-f43f-428f-9b72-476317c008c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.346374 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c6f4312-f43f-428f-9b72-476317c008c0-kube-api-access-78b4l" (OuterVolumeSpecName: "kube-api-access-78b4l") pod "2c6f4312-f43f-428f-9b72-476317c008c0" (UID: "2c6f4312-f43f-428f-9b72-476317c008c0"). InnerVolumeSpecName "kube-api-access-78b4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.392790 4756 generic.go:334] "Generic (PLEG): container finished" podID="2c6f4312-f43f-428f-9b72-476317c008c0" containerID="830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8" exitCode=0 Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.392830 4756 generic.go:334] "Generic (PLEG): container finished" podID="2c6f4312-f43f-428f-9b72-476317c008c0" containerID="5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249" exitCode=2 Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.392842 4756 generic.go:334] "Generic (PLEG): container finished" podID="2c6f4312-f43f-428f-9b72-476317c008c0" containerID="a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334" exitCode=0 Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.392850 4756 generic.go:334] "Generic (PLEG): container finished" podID="2c6f4312-f43f-428f-9b72-476317c008c0" containerID="9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311" exitCode=0 Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.392895 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6f4312-f43f-428f-9b72-476317c008c0","Type":"ContainerDied","Data":"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8"} Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.392932 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6f4312-f43f-428f-9b72-476317c008c0","Type":"ContainerDied","Data":"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249"} Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.392947 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6f4312-f43f-428f-9b72-476317c008c0","Type":"ContainerDied","Data":"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334"} Nov 24 12:47:52 crc 
kubenswrapper[4756]: I1124 12:47:52.392957 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6f4312-f43f-428f-9b72-476317c008c0","Type":"ContainerDied","Data":"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311"} Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.392968 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c6f4312-f43f-428f-9b72-476317c008c0","Type":"ContainerDied","Data":"78b9e26c276e6be50807f4b658e4769940a398b254b23031a27d42ee1e308f7a"} Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.392984 4756 scope.go:117] "RemoveContainer" containerID="830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.393135 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.401587 4756 generic.go:334] "Generic (PLEG): container finished" podID="181a0a4c-0297-438f-a928-1127f0b93627" containerID="8a9f79b482b7ccc751b59be36d167681e30a6a8512ca9e7fdc23e53b14f6172a" exitCode=143 Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.401627 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"181a0a4c-0297-438f-a928-1127f0b93627","Type":"ContainerDied","Data":"8a9f79b482b7ccc751b59be36d167681e30a6a8512ca9e7fdc23e53b14f6172a"} Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.401914 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.403773 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2c6f4312-f43f-428f-9b72-476317c008c0" (UID: 
"2c6f4312-f43f-428f-9b72-476317c008c0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.422276 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.423073 4756 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c6f4312-f43f-428f-9b72-476317c008c0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.423567 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78b4l\" (UniqueName: \"kubernetes.io/projected/2c6f4312-f43f-428f-9b72-476317c008c0-kube-api-access-78b4l\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.423596 4756 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.428522 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2c6f4312-f43f-428f-9b72-476317c008c0" (UID: "2c6f4312-f43f-428f-9b72-476317c008c0"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.430316 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c6f4312-f43f-428f-9b72-476317c008c0" (UID: "2c6f4312-f43f-428f-9b72-476317c008c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.436197 4756 scope.go:117] "RemoveContainer" containerID="5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.473881 4756 scope.go:117] "RemoveContainer" containerID="a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.494394 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-config-data" (OuterVolumeSpecName: "config-data") pod "2c6f4312-f43f-428f-9b72-476317c008c0" (UID: "2c6f4312-f43f-428f-9b72-476317c008c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.526101 4756 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.526140 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.526168 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6f4312-f43f-428f-9b72-476317c008c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.634012 4756 scope.go:117] "RemoveContainer" containerID="9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.656589 4756 scope.go:117] "RemoveContainer" containerID="830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8" Nov 24 12:47:52 crc kubenswrapper[4756]: E1124 12:47:52.657282 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8\": container with ID starting with 830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8 not found: ID does not exist" containerID="830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.657392 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8"} err="failed to get container status 
\"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8\": rpc error: code = NotFound desc = could not find container \"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8\": container with ID starting with 830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.657525 4756 scope.go:117] "RemoveContainer" containerID="5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249" Nov 24 12:47:52 crc kubenswrapper[4756]: E1124 12:47:52.658081 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249\": container with ID starting with 5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249 not found: ID does not exist" containerID="5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.658189 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249"} err="failed to get container status \"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249\": rpc error: code = NotFound desc = could not find container \"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249\": container with ID starting with 5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.658284 4756 scope.go:117] "RemoveContainer" containerID="a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334" Nov 24 12:47:52 crc kubenswrapper[4756]: E1124 12:47:52.658750 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334\": container with ID starting with a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334 not found: ID does not exist" containerID="a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.658840 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334"} err="failed to get container status \"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334\": rpc error: code = NotFound desc = could not find container \"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334\": container with ID starting with a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.658960 4756 scope.go:117] "RemoveContainer" containerID="9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311" Nov 24 12:47:52 crc kubenswrapper[4756]: E1124 12:47:52.659466 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311\": container with ID starting with 9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311 not found: ID does not exist" containerID="9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.659558 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311"} err="failed to get container status \"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311\": rpc error: code = NotFound desc = could not find container \"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311\": container with ID 
starting with 9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.659638 4756 scope.go:117] "RemoveContainer" containerID="830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.660074 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8"} err="failed to get container status \"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8\": rpc error: code = NotFound desc = could not find container \"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8\": container with ID starting with 830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.660125 4756 scope.go:117] "RemoveContainer" containerID="5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.660539 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249"} err="failed to get container status \"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249\": rpc error: code = NotFound desc = could not find container \"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249\": container with ID starting with 5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.660637 4756 scope.go:117] "RemoveContainer" containerID="a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.661064 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334"} err="failed to get container status \"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334\": rpc error: code = NotFound desc = could not find container \"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334\": container with ID starting with a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.661186 4756 scope.go:117] "RemoveContainer" containerID="9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.661754 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311"} err="failed to get container status \"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311\": rpc error: code = NotFound desc = could not find container \"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311\": container with ID starting with 9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.661815 4756 scope.go:117] "RemoveContainer" containerID="830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.662180 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8"} err="failed to get container status \"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8\": rpc error: code = NotFound desc = could not find container \"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8\": container with ID starting with 830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8 not found: ID does not 
exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.662204 4756 scope.go:117] "RemoveContainer" containerID="5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.662558 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249"} err="failed to get container status \"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249\": rpc error: code = NotFound desc = could not find container \"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249\": container with ID starting with 5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.662586 4756 scope.go:117] "RemoveContainer" containerID="a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.662835 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334"} err="failed to get container status \"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334\": rpc error: code = NotFound desc = could not find container \"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334\": container with ID starting with a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.662936 4756 scope.go:117] "RemoveContainer" containerID="9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.663330 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311"} err="failed to get container status 
\"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311\": rpc error: code = NotFound desc = could not find container \"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311\": container with ID starting with 9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.663353 4756 scope.go:117] "RemoveContainer" containerID="830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.663598 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8"} err="failed to get container status \"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8\": rpc error: code = NotFound desc = could not find container \"830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8\": container with ID starting with 830587df5635d04eff355303249b4411643e751dcae3b2bce3af80ca9ec4e9d8 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.663618 4756 scope.go:117] "RemoveContainer" containerID="5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.663855 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249"} err="failed to get container status \"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249\": rpc error: code = NotFound desc = could not find container \"5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249\": container with ID starting with 5d796c1be79cded770174e55adcae94b21e17bee6c28ef1eb17322f40161b249 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.663875 4756 scope.go:117] "RemoveContainer" 
containerID="a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.664229 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334"} err="failed to get container status \"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334\": rpc error: code = NotFound desc = could not find container \"a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334\": container with ID starting with a878ff97f201677a14e64d783a809dd153831475b8f54436bbdfe456f23b9334 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.664294 4756 scope.go:117] "RemoveContainer" containerID="9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.664611 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311"} err="failed to get container status \"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311\": rpc error: code = NotFound desc = could not find container \"9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311\": container with ID starting with 9a70c5f62f99be698753c6334549efaa2a054e1116af54da0d76e5477db07311 not found: ID does not exist" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.730205 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.746194 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.762306 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:52 crc kubenswrapper[4756]: E1124 12:47:52.762798 4756 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="ceilometer-notification-agent" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.762817 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="ceilometer-notification-agent" Nov 24 12:47:52 crc kubenswrapper[4756]: E1124 12:47:52.762833 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="proxy-httpd" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.762841 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="proxy-httpd" Nov 24 12:47:52 crc kubenswrapper[4756]: E1124 12:47:52.762863 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="sg-core" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.762878 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="sg-core" Nov 24 12:47:52 crc kubenswrapper[4756]: E1124 12:47:52.762892 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="ceilometer-central-agent" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.762898 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="ceilometer-central-agent" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.763132 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="ceilometer-notification-agent" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.763150 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="proxy-httpd" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.763184 4756 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="sg-core" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.763200 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" containerName="ceilometer-central-agent" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.765637 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.767797 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.770705 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.770938 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.779819 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.830840 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4pkf\" (UniqueName: \"kubernetes.io/projected/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-kube-api-access-f4pkf\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.830921 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.831028 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.831054 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-scripts\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.831105 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-log-httpd\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.831137 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.831191 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-run-httpd\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.831246 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-config-data\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.932489 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4pkf\" (UniqueName: \"kubernetes.io/projected/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-kube-api-access-f4pkf\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.932584 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.932626 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.932648 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-scripts\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.932685 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-log-httpd\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc 
kubenswrapper[4756]: I1124 12:47:52.932709 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.932739 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-run-httpd\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.932785 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-config-data\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.933936 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-run-httpd\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.934125 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-log-httpd\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.940314 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.940498 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.941314 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-config-data\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.941887 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-scripts\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.959503 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:52 crc kubenswrapper[4756]: I1124 12:47:52.972641 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4pkf\" (UniqueName: \"kubernetes.io/projected/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-kube-api-access-f4pkf\") pod \"ceilometer-0\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " pod="openstack/ceilometer-0" Nov 24 12:47:53 crc kubenswrapper[4756]: I1124 12:47:53.085537 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:47:53 crc kubenswrapper[4756]: I1124 12:47:53.116626 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:53 crc kubenswrapper[4756]: I1124 12:47:53.589510 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:47:53 crc kubenswrapper[4756]: W1124 12:47:53.595732 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod400bd5ca_e7cd_4be6_9a34_2c90c3c05839.slice/crio-aa9496b24435dd883d77b644440853ea3b9737d3e15bb7bf641e9d8dcadb6c70 WatchSource:0}: Error finding container aa9496b24435dd883d77b644440853ea3b9737d3e15bb7bf641e9d8dcadb6c70: Status 404 returned error can't find the container with id aa9496b24435dd883d77b644440853ea3b9737d3e15bb7bf641e9d8dcadb6c70 Nov 24 12:47:54 crc kubenswrapper[4756]: I1124 12:47:54.422463 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"400bd5ca-e7cd-4be6-9a34-2c90c3c05839","Type":"ContainerStarted","Data":"7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4"} Nov 24 12:47:54 crc kubenswrapper[4756]: I1124 12:47:54.422896 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"400bd5ca-e7cd-4be6-9a34-2c90c3c05839","Type":"ContainerStarted","Data":"aa9496b24435dd883d77b644440853ea3b9737d3e15bb7bf641e9d8dcadb6c70"} Nov 24 12:47:54 crc kubenswrapper[4756]: I1124 12:47:54.487212 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c6f4312-f43f-428f-9b72-476317c008c0" path="/var/lib/kubelet/pods/2c6f4312-f43f-428f-9b72-476317c008c0/volumes" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.030207 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.186908 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181a0a4c-0297-438f-a928-1127f0b93627-combined-ca-bundle\") pod \"181a0a4c-0297-438f-a928-1127f0b93627\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.187445 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/181a0a4c-0297-438f-a928-1127f0b93627-logs\") pod \"181a0a4c-0297-438f-a928-1127f0b93627\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.187614 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181a0a4c-0297-438f-a928-1127f0b93627-config-data\") pod \"181a0a4c-0297-438f-a928-1127f0b93627\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.187689 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-257ck\" (UniqueName: \"kubernetes.io/projected/181a0a4c-0297-438f-a928-1127f0b93627-kube-api-access-257ck\") pod \"181a0a4c-0297-438f-a928-1127f0b93627\" (UID: \"181a0a4c-0297-438f-a928-1127f0b93627\") " Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.188145 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/181a0a4c-0297-438f-a928-1127f0b93627-logs" (OuterVolumeSpecName: "logs") pod "181a0a4c-0297-438f-a928-1127f0b93627" (UID: "181a0a4c-0297-438f-a928-1127f0b93627"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.188658 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/181a0a4c-0297-438f-a928-1127f0b93627-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.199376 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181a0a4c-0297-438f-a928-1127f0b93627-kube-api-access-257ck" (OuterVolumeSpecName: "kube-api-access-257ck") pod "181a0a4c-0297-438f-a928-1127f0b93627" (UID: "181a0a4c-0297-438f-a928-1127f0b93627"). InnerVolumeSpecName "kube-api-access-257ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.234434 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/181a0a4c-0297-438f-a928-1127f0b93627-config-data" (OuterVolumeSpecName: "config-data") pod "181a0a4c-0297-438f-a928-1127f0b93627" (UID: "181a0a4c-0297-438f-a928-1127f0b93627"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.245359 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/181a0a4c-0297-438f-a928-1127f0b93627-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "181a0a4c-0297-438f-a928-1127f0b93627" (UID: "181a0a4c-0297-438f-a928-1127f0b93627"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.291290 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/181a0a4c-0297-438f-a928-1127f0b93627-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.291339 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-257ck\" (UniqueName: \"kubernetes.io/projected/181a0a4c-0297-438f-a928-1127f0b93627-kube-api-access-257ck\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.291356 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/181a0a4c-0297-438f-a928-1127f0b93627-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.376127 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.393376 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.434527 4756 generic.go:334] "Generic (PLEG): container finished" podID="181a0a4c-0297-438f-a928-1127f0b93627" containerID="610cebaf158bf513b671c8ac2a592d6f4b3eeccaa14c4d12296ab533830c8122" exitCode=0 Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.434613 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"181a0a4c-0297-438f-a928-1127f0b93627","Type":"ContainerDied","Data":"610cebaf158bf513b671c8ac2a592d6f4b3eeccaa14c4d12296ab533830c8122"} Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.434846 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"181a0a4c-0297-438f-a928-1127f0b93627","Type":"ContainerDied","Data":"f7ee8ab47a14dfc8e3a80126d0ceedae33d12913c69f0a9e9a55d09f55691062"} Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.434640 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.434867 4756 scope.go:117] "RemoveContainer" containerID="610cebaf158bf513b671c8ac2a592d6f4b3eeccaa14c4d12296ab533830c8122" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.437663 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"400bd5ca-e7cd-4be6-9a34-2c90c3c05839","Type":"ContainerStarted","Data":"271104b29ba10a8ce9126ff2864598b444ef857a4bceeb84dd714a490dd983de"} Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.459786 4756 scope.go:117] "RemoveContainer" containerID="8a9f79b482b7ccc751b59be36d167681e30a6a8512ca9e7fdc23e53b14f6172a" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.461630 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.485482 4756 scope.go:117] "RemoveContainer" containerID="610cebaf158bf513b671c8ac2a592d6f4b3eeccaa14c4d12296ab533830c8122" Nov 24 12:47:55 crc kubenswrapper[4756]: E1124 12:47:55.485999 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"610cebaf158bf513b671c8ac2a592d6f4b3eeccaa14c4d12296ab533830c8122\": container with ID starting with 610cebaf158bf513b671c8ac2a592d6f4b3eeccaa14c4d12296ab533830c8122 not found: ID does not exist" containerID="610cebaf158bf513b671c8ac2a592d6f4b3eeccaa14c4d12296ab533830c8122" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.486051 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"610cebaf158bf513b671c8ac2a592d6f4b3eeccaa14c4d12296ab533830c8122"} err="failed to get container status \"610cebaf158bf513b671c8ac2a592d6f4b3eeccaa14c4d12296ab533830c8122\": rpc error: code = NotFound desc = could not find container \"610cebaf158bf513b671c8ac2a592d6f4b3eeccaa14c4d12296ab533830c8122\": container with ID starting with 610cebaf158bf513b671c8ac2a592d6f4b3eeccaa14c4d12296ab533830c8122 not found: ID does not exist" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.486088 4756 scope.go:117] "RemoveContainer" containerID="8a9f79b482b7ccc751b59be36d167681e30a6a8512ca9e7fdc23e53b14f6172a" Nov 24 12:47:55 crc kubenswrapper[4756]: E1124 12:47:55.486593 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a9f79b482b7ccc751b59be36d167681e30a6a8512ca9e7fdc23e53b14f6172a\": container with ID starting with 8a9f79b482b7ccc751b59be36d167681e30a6a8512ca9e7fdc23e53b14f6172a not found: ID does not exist" containerID="8a9f79b482b7ccc751b59be36d167681e30a6a8512ca9e7fdc23e53b14f6172a" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.486647 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a9f79b482b7ccc751b59be36d167681e30a6a8512ca9e7fdc23e53b14f6172a"} err="failed to get container status \"8a9f79b482b7ccc751b59be36d167681e30a6a8512ca9e7fdc23e53b14f6172a\": rpc error: code = NotFound desc = could not find container \"8a9f79b482b7ccc751b59be36d167681e30a6a8512ca9e7fdc23e53b14f6172a\": container with ID starting with 8a9f79b482b7ccc751b59be36d167681e30a6a8512ca9e7fdc23e53b14f6172a not found: ID does not exist" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.502278 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.513610 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 
12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.530717 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 12:47:55 crc kubenswrapper[4756]: E1124 12:47:55.531262 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="181a0a4c-0297-438f-a928-1127f0b93627" containerName="nova-api-log" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.531288 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="181a0a4c-0297-438f-a928-1127f0b93627" containerName="nova-api-log" Nov 24 12:47:55 crc kubenswrapper[4756]: E1124 12:47:55.531326 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="181a0a4c-0297-438f-a928-1127f0b93627" containerName="nova-api-api" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.531335 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="181a0a4c-0297-438f-a928-1127f0b93627" containerName="nova-api-api" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.531552 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="181a0a4c-0297-438f-a928-1127f0b93627" containerName="nova-api-log" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.531577 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="181a0a4c-0297-438f-a928-1127f0b93627" containerName="nova-api-api" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.532970 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.535689 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.535944 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.536237 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.572180 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.691824 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-tw58s"] Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.693583 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.699132 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.699354 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-internal-tls-certs\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.699455 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11294935-80cc-4190-9ebe-8a76e8abe384-logs\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 
12:47:55.699483 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.699504 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-config-data\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.699542 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-public-tls-certs\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.699558 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9m66\" (UniqueName: \"kubernetes.io/projected/11294935-80cc-4190-9ebe-8a76e8abe384-kube-api-access-w9m66\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.700489 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.708938 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tw58s"] Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.802039 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-config-data\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.802512 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-public-tls-certs\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.802544 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9m66\" (UniqueName: \"kubernetes.io/projected/11294935-80cc-4190-9ebe-8a76e8abe384-kube-api-access-w9m66\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.802669 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-config-data\") pod \"nova-cell1-cell-mapping-tw58s\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.802716 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-internal-tls-certs\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.802742 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt4qt\" (UniqueName: \"kubernetes.io/projected/573ef3b1-3c55-4e67-9df0-d52895183be8-kube-api-access-jt4qt\") pod \"nova-cell1-cell-mapping-tw58s\" (UID: 
\"573ef3b1-3c55-4e67-9df0-d52895183be8\") " pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.802759 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-scripts\") pod \"nova-cell1-cell-mapping-tw58s\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.803047 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11294935-80cc-4190-9ebe-8a76e8abe384-logs\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.803109 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tw58s\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.803141 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.803543 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11294935-80cc-4190-9ebe-8a76e8abe384-logs\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.808944 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-internal-tls-certs\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.809580 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.809831 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-public-tls-certs\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.819715 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-config-data\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.823328 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9m66\" (UniqueName: \"kubernetes.io/projected/11294935-80cc-4190-9ebe-8a76e8abe384-kube-api-access-w9m66\") pod \"nova-api-0\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.871658 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.905518 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-config-data\") pod \"nova-cell1-cell-mapping-tw58s\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.905642 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt4qt\" (UniqueName: \"kubernetes.io/projected/573ef3b1-3c55-4e67-9df0-d52895183be8-kube-api-access-jt4qt\") pod \"nova-cell1-cell-mapping-tw58s\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.905673 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-scripts\") pod \"nova-cell1-cell-mapping-tw58s\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.905777 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tw58s\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.911021 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tw58s\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " pod="openstack/nova-cell1-cell-mapping-tw58s" 
Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.911021 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-scripts\") pod \"nova-cell1-cell-mapping-tw58s\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.911770 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-config-data\") pod \"nova-cell1-cell-mapping-tw58s\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:47:55 crc kubenswrapper[4756]: I1124 12:47:55.930705 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt4qt\" (UniqueName: \"kubernetes.io/projected/573ef3b1-3c55-4e67-9df0-d52895183be8-kube-api-access-jt4qt\") pod \"nova-cell1-cell-mapping-tw58s\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:47:56 crc kubenswrapper[4756]: I1124 12:47:56.200655 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:47:56 crc kubenswrapper[4756]: I1124 12:47:56.423673 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:47:56 crc kubenswrapper[4756]: I1124 12:47:56.470463 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"400bd5ca-e7cd-4be6-9a34-2c90c3c05839","Type":"ContainerStarted","Data":"d5e67aaeff9fb805c223619c2d6b23b26d36d2aa40a3d61adbfb775513442f9f"} Nov 24 12:47:56 crc kubenswrapper[4756]: I1124 12:47:56.492920 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="181a0a4c-0297-438f-a928-1127f0b93627" path="/var/lib/kubelet/pods/181a0a4c-0297-438f-a928-1127f0b93627/volumes" Nov 24 12:47:56 crc kubenswrapper[4756]: I1124 12:47:56.493941 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11294935-80cc-4190-9ebe-8a76e8abe384","Type":"ContainerStarted","Data":"8da5e44339a7ed97f10ac41eccc6b4853b0c2fa25d5584fdcff9ce5a52d70d45"} Nov 24 12:47:56 crc kubenswrapper[4756]: I1124 12:47:56.840942 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tw58s"] Nov 24 12:47:57 crc kubenswrapper[4756]: I1124 12:47:57.496948 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11294935-80cc-4190-9ebe-8a76e8abe384","Type":"ContainerStarted","Data":"60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4"} Nov 24 12:47:57 crc kubenswrapper[4756]: I1124 12:47:57.498503 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11294935-80cc-4190-9ebe-8a76e8abe384","Type":"ContainerStarted","Data":"2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d"} Nov 24 12:47:57 crc kubenswrapper[4756]: I1124 12:47:57.499491 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tw58s" 
event={"ID":"573ef3b1-3c55-4e67-9df0-d52895183be8","Type":"ContainerStarted","Data":"790f35ce6630d64bd630a8b33b7865a59b54f5aebb2f69576d79a4bdaaddc318"} Nov 24 12:47:57 crc kubenswrapper[4756]: I1124 12:47:57.499525 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tw58s" event={"ID":"573ef3b1-3c55-4e67-9df0-d52895183be8","Type":"ContainerStarted","Data":"50268659265d8c7d0ec54035cba23f2107f1a3e26cbfb547743f55e72054fd9d"} Nov 24 12:47:57 crc kubenswrapper[4756]: I1124 12:47:57.530412 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.530387971 podStartE2EDuration="2.530387971s" podCreationTimestamp="2025-11-24 12:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:47:57.516544027 +0000 UTC m=+1209.874058179" watchObservedRunningTime="2025-11-24 12:47:57.530387971 +0000 UTC m=+1209.887902113" Nov 24 12:47:57 crc kubenswrapper[4756]: I1124 12:47:57.551011 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-tw58s" podStartSLOduration=2.550015862 podStartE2EDuration="2.550015862s" podCreationTimestamp="2025-11-24 12:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:47:57.537618097 +0000 UTC m=+1209.895132239" watchObservedRunningTime="2025-11-24 12:47:57.550015862 +0000 UTC m=+1209.907530004" Nov 24 12:47:58 crc kubenswrapper[4756]: I1124 12:47:58.513538 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"400bd5ca-e7cd-4be6-9a34-2c90c3c05839","Type":"ContainerStarted","Data":"8357ee10de7fee251dc020886070043e387a93836957887e5d2b1a1c1a98571c"} Nov 24 12:47:58 crc kubenswrapper[4756]: I1124 12:47:58.513948 4756 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="ceilometer-central-agent" containerID="cri-o://7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4" gracePeriod=30 Nov 24 12:47:58 crc kubenswrapper[4756]: I1124 12:47:58.514102 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="ceilometer-notification-agent" containerID="cri-o://271104b29ba10a8ce9126ff2864598b444ef857a4bceeb84dd714a490dd983de" gracePeriod=30 Nov 24 12:47:58 crc kubenswrapper[4756]: I1124 12:47:58.514057 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="sg-core" containerID="cri-o://d5e67aaeff9fb805c223619c2d6b23b26d36d2aa40a3d61adbfb775513442f9f" gracePeriod=30 Nov 24 12:47:58 crc kubenswrapper[4756]: I1124 12:47:58.514264 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="proxy-httpd" containerID="cri-o://8357ee10de7fee251dc020886070043e387a93836957887e5d2b1a1c1a98571c" gracePeriod=30 Nov 24 12:47:58 crc kubenswrapper[4756]: I1124 12:47:58.567097 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.264276125 podStartE2EDuration="6.567073798s" podCreationTimestamp="2025-11-24 12:47:52 +0000 UTC" firstStartedPulling="2025-11-24 12:47:53.598989976 +0000 UTC m=+1205.956504118" lastFinishedPulling="2025-11-24 12:47:57.901787649 +0000 UTC m=+1210.259301791" observedRunningTime="2025-11-24 12:47:58.555952048 +0000 UTC m=+1210.913466190" watchObservedRunningTime="2025-11-24 12:47:58.567073798 +0000 UTC m=+1210.924587940" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.078310 4756 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.156535 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-lfz22"] Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.159046 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-lfz22" podUID="23a10485-152b-4bf5-bb3d-49fe345f390e" containerName="dnsmasq-dns" containerID="cri-o://ddbfb3777c6aec779b6c09f89bddd566114d8a482dc656a6ce3c308d7c9e1079" gracePeriod=10 Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.529315 4756 generic.go:334] "Generic (PLEG): container finished" podID="23a10485-152b-4bf5-bb3d-49fe345f390e" containerID="ddbfb3777c6aec779b6c09f89bddd566114d8a482dc656a6ce3c308d7c9e1079" exitCode=0 Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.531885 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-lfz22" event={"ID":"23a10485-152b-4bf5-bb3d-49fe345f390e","Type":"ContainerDied","Data":"ddbfb3777c6aec779b6c09f89bddd566114d8a482dc656a6ce3c308d7c9e1079"} Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.537286 4756 generic.go:334] "Generic (PLEG): container finished" podID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerID="8357ee10de7fee251dc020886070043e387a93836957887e5d2b1a1c1a98571c" exitCode=0 Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.537333 4756 generic.go:334] "Generic (PLEG): container finished" podID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerID="d5e67aaeff9fb805c223619c2d6b23b26d36d2aa40a3d61adbfb775513442f9f" exitCode=2 Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.537344 4756 generic.go:334] "Generic (PLEG): container finished" podID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerID="271104b29ba10a8ce9126ff2864598b444ef857a4bceeb84dd714a490dd983de" exitCode=0 Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 
12:47:59.537367 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"400bd5ca-e7cd-4be6-9a34-2c90c3c05839","Type":"ContainerDied","Data":"8357ee10de7fee251dc020886070043e387a93836957887e5d2b1a1c1a98571c"} Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.537399 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"400bd5ca-e7cd-4be6-9a34-2c90c3c05839","Type":"ContainerDied","Data":"d5e67aaeff9fb805c223619c2d6b23b26d36d2aa40a3d61adbfb775513442f9f"} Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.537412 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"400bd5ca-e7cd-4be6-9a34-2c90c3c05839","Type":"ContainerDied","Data":"271104b29ba10a8ce9126ff2864598b444ef857a4bceeb84dd714a490dd983de"} Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.728931 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.818831 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-dns-swift-storage-0\") pod \"23a10485-152b-4bf5-bb3d-49fe345f390e\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.818917 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqxs5\" (UniqueName: \"kubernetes.io/projected/23a10485-152b-4bf5-bb3d-49fe345f390e-kube-api-access-vqxs5\") pod \"23a10485-152b-4bf5-bb3d-49fe345f390e\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.819132 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-ovsdbserver-nb\") pod \"23a10485-152b-4bf5-bb3d-49fe345f390e\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.819282 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-dns-svc\") pod \"23a10485-152b-4bf5-bb3d-49fe345f390e\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.819353 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-ovsdbserver-sb\") pod \"23a10485-152b-4bf5-bb3d-49fe345f390e\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.819557 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-config\") pod \"23a10485-152b-4bf5-bb3d-49fe345f390e\" (UID: \"23a10485-152b-4bf5-bb3d-49fe345f390e\") " Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.834123 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a10485-152b-4bf5-bb3d-49fe345f390e-kube-api-access-vqxs5" (OuterVolumeSpecName: "kube-api-access-vqxs5") pod "23a10485-152b-4bf5-bb3d-49fe345f390e" (UID: "23a10485-152b-4bf5-bb3d-49fe345f390e"). InnerVolumeSpecName "kube-api-access-vqxs5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.892879 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23a10485-152b-4bf5-bb3d-49fe345f390e" (UID: "23a10485-152b-4bf5-bb3d-49fe345f390e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.901840 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-config" (OuterVolumeSpecName: "config") pod "23a10485-152b-4bf5-bb3d-49fe345f390e" (UID: "23a10485-152b-4bf5-bb3d-49fe345f390e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.902920 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23a10485-152b-4bf5-bb3d-49fe345f390e" (UID: "23a10485-152b-4bf5-bb3d-49fe345f390e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.909007 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23a10485-152b-4bf5-bb3d-49fe345f390e" (UID: "23a10485-152b-4bf5-bb3d-49fe345f390e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.914044 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23a10485-152b-4bf5-bb3d-49fe345f390e" (UID: "23a10485-152b-4bf5-bb3d-49fe345f390e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.924076 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.924125 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.924140 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqxs5\" (UniqueName: \"kubernetes.io/projected/23a10485-152b-4bf5-bb3d-49fe345f390e-kube-api-access-vqxs5\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.924152 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.924184 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:47:59 crc kubenswrapper[4756]: I1124 12:47:59.924193 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/23a10485-152b-4bf5-bb3d-49fe345f390e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:00 crc kubenswrapper[4756]: I1124 12:48:00.553331 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-lfz22" event={"ID":"23a10485-152b-4bf5-bb3d-49fe345f390e","Type":"ContainerDied","Data":"30c957b4daa394124a8140c9657be2fff414aec41b829179e49e4e8465496324"} Nov 24 12:48:00 crc kubenswrapper[4756]: I1124 12:48:00.553647 4756 scope.go:117] "RemoveContainer" containerID="ddbfb3777c6aec779b6c09f89bddd566114d8a482dc656a6ce3c308d7c9e1079" Nov 24 12:48:00 crc kubenswrapper[4756]: I1124 12:48:00.553417 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-lfz22" Nov 24 12:48:00 crc kubenswrapper[4756]: I1124 12:48:00.585903 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-lfz22"] Nov 24 12:48:00 crc kubenswrapper[4756]: I1124 12:48:00.589383 4756 scope.go:117] "RemoveContainer" containerID="a7d14867dea67d3d6891a305d2fc8810cf67746628d5afe7e630c687daa4f02a" Nov 24 12:48:00 crc kubenswrapper[4756]: I1124 12:48:00.599283 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-lfz22"] Nov 24 12:48:00 crc kubenswrapper[4756]: E1124 12:48:00.668874 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23a10485_152b_4bf5_bb3d_49fe345f390e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23a10485_152b_4bf5_bb3d_49fe345f390e.slice/crio-30c957b4daa394124a8140c9657be2fff414aec41b829179e49e4e8465496324\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod400bd5ca_e7cd_4be6_9a34_2c90c3c05839.slice/crio-conmon-7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4.scope\": RecentStats: unable to find data in memory cache]" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.077133 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.154119 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-scripts\") pod \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.154633 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4pkf\" (UniqueName: \"kubernetes.io/projected/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-kube-api-access-f4pkf\") pod \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.154656 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-sg-core-conf-yaml\") pod \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.154710 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-run-httpd\") pod \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.154785 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-config-data\") pod \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.154804 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-combined-ca-bundle\") pod \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.155039 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-ceilometer-tls-certs\") pod \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.155120 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-log-httpd\") pod \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\" (UID: \"400bd5ca-e7cd-4be6-9a34-2c90c3c05839\") " Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.155212 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "400bd5ca-e7cd-4be6-9a34-2c90c3c05839" (UID: "400bd5ca-e7cd-4be6-9a34-2c90c3c05839"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.155771 4756 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.156104 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "400bd5ca-e7cd-4be6-9a34-2c90c3c05839" (UID: "400bd5ca-e7cd-4be6-9a34-2c90c3c05839"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.162486 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-kube-api-access-f4pkf" (OuterVolumeSpecName: "kube-api-access-f4pkf") pod "400bd5ca-e7cd-4be6-9a34-2c90c3c05839" (UID: "400bd5ca-e7cd-4be6-9a34-2c90c3c05839"). InnerVolumeSpecName "kube-api-access-f4pkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.182184 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-scripts" (OuterVolumeSpecName: "scripts") pod "400bd5ca-e7cd-4be6-9a34-2c90c3c05839" (UID: "400bd5ca-e7cd-4be6-9a34-2c90c3c05839"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.193553 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "400bd5ca-e7cd-4be6-9a34-2c90c3c05839" (UID: "400bd5ca-e7cd-4be6-9a34-2c90c3c05839"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.234294 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "400bd5ca-e7cd-4be6-9a34-2c90c3c05839" (UID: "400bd5ca-e7cd-4be6-9a34-2c90c3c05839"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.257464 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "400bd5ca-e7cd-4be6-9a34-2c90c3c05839" (UID: "400bd5ca-e7cd-4be6-9a34-2c90c3c05839"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.257500 4756 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.257535 4756 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.257546 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.257558 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4pkf\" (UniqueName: 
\"kubernetes.io/projected/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-kube-api-access-f4pkf\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.257573 4756 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.315446 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-config-data" (OuterVolumeSpecName: "config-data") pod "400bd5ca-e7cd-4be6-9a34-2c90c3c05839" (UID: "400bd5ca-e7cd-4be6-9a34-2c90c3c05839"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.359199 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.359255 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400bd5ca-e7cd-4be6-9a34-2c90c3c05839-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.575148 4756 generic.go:334] "Generic (PLEG): container finished" podID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerID="7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4" exitCode=0 Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.575247 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"400bd5ca-e7cd-4be6-9a34-2c90c3c05839","Type":"ContainerDied","Data":"7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4"} Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.575286 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"400bd5ca-e7cd-4be6-9a34-2c90c3c05839","Type":"ContainerDied","Data":"aa9496b24435dd883d77b644440853ea3b9737d3e15bb7bf641e9d8dcadb6c70"} Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.575305 4756 scope.go:117] "RemoveContainer" containerID="8357ee10de7fee251dc020886070043e387a93836957887e5d2b1a1c1a98571c" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.575356 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.608241 4756 scope.go:117] "RemoveContainer" containerID="d5e67aaeff9fb805c223619c2d6b23b26d36d2aa40a3d61adbfb775513442f9f" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.628315 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.637309 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.653864 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:48:01 crc kubenswrapper[4756]: E1124 12:48:01.659144 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a10485-152b-4bf5-bb3d-49fe345f390e" containerName="init" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.659188 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a10485-152b-4bf5-bb3d-49fe345f390e" containerName="init" Nov 24 12:48:01 crc kubenswrapper[4756]: E1124 12:48:01.659204 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="sg-core" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.659211 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="sg-core" Nov 24 12:48:01 crc kubenswrapper[4756]: E1124 12:48:01.659227 4756 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="proxy-httpd" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.659233 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="proxy-httpd" Nov 24 12:48:01 crc kubenswrapper[4756]: E1124 12:48:01.659249 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a10485-152b-4bf5-bb3d-49fe345f390e" containerName="dnsmasq-dns" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.659257 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a10485-152b-4bf5-bb3d-49fe345f390e" containerName="dnsmasq-dns" Nov 24 12:48:01 crc kubenswrapper[4756]: E1124 12:48:01.659292 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="ceilometer-notification-agent" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.659298 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="ceilometer-notification-agent" Nov 24 12:48:01 crc kubenswrapper[4756]: E1124 12:48:01.659319 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="ceilometer-central-agent" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.659324 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="ceilometer-central-agent" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.659662 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a10485-152b-4bf5-bb3d-49fe345f390e" containerName="dnsmasq-dns" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.659679 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="ceilometer-central-agent" Nov 24 12:48:01 crc 
kubenswrapper[4756]: I1124 12:48:01.659689 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="ceilometer-notification-agent" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.659699 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="sg-core" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.659707 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" containerName="proxy-httpd" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.664547 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.668291 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.668402 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.668537 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.677119 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.678270 4756 scope.go:117] "RemoveContainer" containerID="271104b29ba10a8ce9126ff2864598b444ef857a4bceeb84dd714a490dd983de" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.707490 4756 scope.go:117] "RemoveContainer" containerID="7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.766934 4756 scope.go:117] "RemoveContainer" containerID="8357ee10de7fee251dc020886070043e387a93836957887e5d2b1a1c1a98571c" Nov 24 12:48:01 crc kubenswrapper[4756]: E1124 
12:48:01.767396 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8357ee10de7fee251dc020886070043e387a93836957887e5d2b1a1c1a98571c\": container with ID starting with 8357ee10de7fee251dc020886070043e387a93836957887e5d2b1a1c1a98571c not found: ID does not exist" containerID="8357ee10de7fee251dc020886070043e387a93836957887e5d2b1a1c1a98571c" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.767426 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8357ee10de7fee251dc020886070043e387a93836957887e5d2b1a1c1a98571c"} err="failed to get container status \"8357ee10de7fee251dc020886070043e387a93836957887e5d2b1a1c1a98571c\": rpc error: code = NotFound desc = could not find container \"8357ee10de7fee251dc020886070043e387a93836957887e5d2b1a1c1a98571c\": container with ID starting with 8357ee10de7fee251dc020886070043e387a93836957887e5d2b1a1c1a98571c not found: ID does not exist" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.767471 4756 scope.go:117] "RemoveContainer" containerID="d5e67aaeff9fb805c223619c2d6b23b26d36d2aa40a3d61adbfb775513442f9f" Nov 24 12:48:01 crc kubenswrapper[4756]: E1124 12:48:01.767933 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5e67aaeff9fb805c223619c2d6b23b26d36d2aa40a3d61adbfb775513442f9f\": container with ID starting with d5e67aaeff9fb805c223619c2d6b23b26d36d2aa40a3d61adbfb775513442f9f not found: ID does not exist" containerID="d5e67aaeff9fb805c223619c2d6b23b26d36d2aa40a3d61adbfb775513442f9f" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.767990 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5e67aaeff9fb805c223619c2d6b23b26d36d2aa40a3d61adbfb775513442f9f"} err="failed to get container status \"d5e67aaeff9fb805c223619c2d6b23b26d36d2aa40a3d61adbfb775513442f9f\": rpc 
error: code = NotFound desc = could not find container \"d5e67aaeff9fb805c223619c2d6b23b26d36d2aa40a3d61adbfb775513442f9f\": container with ID starting with d5e67aaeff9fb805c223619c2d6b23b26d36d2aa40a3d61adbfb775513442f9f not found: ID does not exist" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.768008 4756 scope.go:117] "RemoveContainer" containerID="271104b29ba10a8ce9126ff2864598b444ef857a4bceeb84dd714a490dd983de" Nov 24 12:48:01 crc kubenswrapper[4756]: E1124 12:48:01.768371 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"271104b29ba10a8ce9126ff2864598b444ef857a4bceeb84dd714a490dd983de\": container with ID starting with 271104b29ba10a8ce9126ff2864598b444ef857a4bceeb84dd714a490dd983de not found: ID does not exist" containerID="271104b29ba10a8ce9126ff2864598b444ef857a4bceeb84dd714a490dd983de" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.768429 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271104b29ba10a8ce9126ff2864598b444ef857a4bceeb84dd714a490dd983de"} err="failed to get container status \"271104b29ba10a8ce9126ff2864598b444ef857a4bceeb84dd714a490dd983de\": rpc error: code = NotFound desc = could not find container \"271104b29ba10a8ce9126ff2864598b444ef857a4bceeb84dd714a490dd983de\": container with ID starting with 271104b29ba10a8ce9126ff2864598b444ef857a4bceeb84dd714a490dd983de not found: ID does not exist" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.768449 4756 scope.go:117] "RemoveContainer" containerID="7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4" Nov 24 12:48:01 crc kubenswrapper[4756]: E1124 12:48:01.768723 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4\": container with ID starting with 
7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4 not found: ID does not exist" containerID="7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.768750 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4"} err="failed to get container status \"7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4\": rpc error: code = NotFound desc = could not find container \"7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4\": container with ID starting with 7765bc830e65f80c0e3f788a5024cbd085ed2bf6c177db5118ebc213ac3ff2b4 not found: ID does not exist" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.768965 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-scripts\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.769202 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1e15270-1d58-42bb-ad0a-635726bae163-log-httpd\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.769324 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzjpl\" (UniqueName: \"kubernetes.io/projected/f1e15270-1d58-42bb-ad0a-635726bae163-kube-api-access-gzjpl\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.769459 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.769539 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.769570 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-config-data\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.769602 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.769628 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1e15270-1d58-42bb-ad0a-635726bae163-run-httpd\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.874291 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.874493 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.874543 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-config-data\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.874670 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.874796 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1e15270-1d58-42bb-ad0a-635726bae163-run-httpd\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.875330 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-scripts\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 
12:48:01.875560 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1e15270-1d58-42bb-ad0a-635726bae163-log-httpd\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.875960 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzjpl\" (UniqueName: \"kubernetes.io/projected/f1e15270-1d58-42bb-ad0a-635726bae163-kube-api-access-gzjpl\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.876654 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1e15270-1d58-42bb-ad0a-635726bae163-log-httpd\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.876789 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1e15270-1d58-42bb-ad0a-635726bae163-run-httpd\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.881790 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.881874 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-scripts\") pod \"ceilometer-0\" (UID: 
\"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.882803 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-config-data\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.884119 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.897645 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1e15270-1d58-42bb-ad0a-635726bae163-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.904284 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzjpl\" (UniqueName: \"kubernetes.io/projected/f1e15270-1d58-42bb-ad0a-635726bae163-kube-api-access-gzjpl\") pod \"ceilometer-0\" (UID: \"f1e15270-1d58-42bb-ad0a-635726bae163\") " pod="openstack/ceilometer-0" Nov 24 12:48:01 crc kubenswrapper[4756]: I1124 12:48:01.988443 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 12:48:02 crc kubenswrapper[4756]: W1124 12:48:02.460268 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1e15270_1d58_42bb_ad0a_635726bae163.slice/crio-07435090e5c4813ef74d2c466f09fb469c25a6cecacc9bff2ef50ad7ecd1e4c3 WatchSource:0}: Error finding container 07435090e5c4813ef74d2c466f09fb469c25a6cecacc9bff2ef50ad7ecd1e4c3: Status 404 returned error can't find the container with id 07435090e5c4813ef74d2c466f09fb469c25a6cecacc9bff2ef50ad7ecd1e4c3 Nov 24 12:48:02 crc kubenswrapper[4756]: I1124 12:48:02.461095 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 12:48:02 crc kubenswrapper[4756]: I1124 12:48:02.510285 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23a10485-152b-4bf5-bb3d-49fe345f390e" path="/var/lib/kubelet/pods/23a10485-152b-4bf5-bb3d-49fe345f390e/volumes" Nov 24 12:48:02 crc kubenswrapper[4756]: I1124 12:48:02.512481 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="400bd5ca-e7cd-4be6-9a34-2c90c3c05839" path="/var/lib/kubelet/pods/400bd5ca-e7cd-4be6-9a34-2c90c3c05839/volumes" Nov 24 12:48:02 crc kubenswrapper[4756]: I1124 12:48:02.589035 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1e15270-1d58-42bb-ad0a-635726bae163","Type":"ContainerStarted","Data":"07435090e5c4813ef74d2c466f09fb469c25a6cecacc9bff2ef50ad7ecd1e4c3"} Nov 24 12:48:03 crc kubenswrapper[4756]: I1124 12:48:03.479587 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:48:03 crc kubenswrapper[4756]: I1124 12:48:03.479901 4756 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:48:03 crc kubenswrapper[4756]: I1124 12:48:03.479940 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:48:03 crc kubenswrapper[4756]: I1124 12:48:03.480883 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aab2c62b178595e23ab652b4142321a0148fc0017610e7cc4f9bf61e40ae4629"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:48:03 crc kubenswrapper[4756]: I1124 12:48:03.480952 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://aab2c62b178595e23ab652b4142321a0148fc0017610e7cc4f9bf61e40ae4629" gracePeriod=600 Nov 24 12:48:03 crc kubenswrapper[4756]: I1124 12:48:03.609757 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1e15270-1d58-42bb-ad0a-635726bae163","Type":"ContainerStarted","Data":"414e45c6df8c1e7ff35cc945b75a16150ba44cdb1a6cdd34ff0656c849c1ece6"} Nov 24 12:48:03 crc kubenswrapper[4756]: I1124 12:48:03.612361 4756 generic.go:334] "Generic (PLEG): container finished" podID="573ef3b1-3c55-4e67-9df0-d52895183be8" containerID="790f35ce6630d64bd630a8b33b7865a59b54f5aebb2f69576d79a4bdaaddc318" exitCode=0 Nov 24 12:48:03 crc kubenswrapper[4756]: I1124 12:48:03.612417 4756 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell1-cell-mapping-tw58s" event={"ID":"573ef3b1-3c55-4e67-9df0-d52895183be8","Type":"ContainerDied","Data":"790f35ce6630d64bd630a8b33b7865a59b54f5aebb2f69576d79a4bdaaddc318"} Nov 24 12:48:03 crc kubenswrapper[4756]: I1124 12:48:03.615571 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="aab2c62b178595e23ab652b4142321a0148fc0017610e7cc4f9bf61e40ae4629" exitCode=0 Nov 24 12:48:03 crc kubenswrapper[4756]: I1124 12:48:03.615616 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"aab2c62b178595e23ab652b4142321a0148fc0017610e7cc4f9bf61e40ae4629"} Nov 24 12:48:03 crc kubenswrapper[4756]: I1124 12:48:03.615647 4756 scope.go:117] "RemoveContainer" containerID="07694ef974a30730903412aa5f6b85b0f7a0adf88d6a936a30f315e540f03ca9" Nov 24 12:48:04 crc kubenswrapper[4756]: I1124 12:48:04.628646 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"ac456773527ad616724bb83ec4d86cebb123ce3812e319a053d93a0cd5386883"} Nov 24 12:48:04 crc kubenswrapper[4756]: I1124 12:48:04.631775 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1e15270-1d58-42bb-ad0a-635726bae163","Type":"ContainerStarted","Data":"e646403a21c559460459b95eaf7ff9e94401cd8464be2e5f6fde10ee0b1b45a9"} Nov 24 12:48:04 crc kubenswrapper[4756]: I1124 12:48:04.631821 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1e15270-1d58-42bb-ad0a-635726bae163","Type":"ContainerStarted","Data":"5bdb28dfa05f61b07d13c0194961e5c85e3ad3bb2c8e9160739593404a03eb1e"} Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.017923 4756 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.154451 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt4qt\" (UniqueName: \"kubernetes.io/projected/573ef3b1-3c55-4e67-9df0-d52895183be8-kube-api-access-jt4qt\") pod \"573ef3b1-3c55-4e67-9df0-d52895183be8\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.154565 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-scripts\") pod \"573ef3b1-3c55-4e67-9df0-d52895183be8\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.154785 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-combined-ca-bundle\") pod \"573ef3b1-3c55-4e67-9df0-d52895183be8\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.154855 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-config-data\") pod \"573ef3b1-3c55-4e67-9df0-d52895183be8\" (UID: \"573ef3b1-3c55-4e67-9df0-d52895183be8\") " Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.162696 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573ef3b1-3c55-4e67-9df0-d52895183be8-kube-api-access-jt4qt" (OuterVolumeSpecName: "kube-api-access-jt4qt") pod "573ef3b1-3c55-4e67-9df0-d52895183be8" (UID: "573ef3b1-3c55-4e67-9df0-d52895183be8"). InnerVolumeSpecName "kube-api-access-jt4qt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.178136 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-scripts" (OuterVolumeSpecName: "scripts") pod "573ef3b1-3c55-4e67-9df0-d52895183be8" (UID: "573ef3b1-3c55-4e67-9df0-d52895183be8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.204166 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "573ef3b1-3c55-4e67-9df0-d52895183be8" (UID: "573ef3b1-3c55-4e67-9df0-d52895183be8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.204224 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-config-data" (OuterVolumeSpecName: "config-data") pod "573ef3b1-3c55-4e67-9df0-d52895183be8" (UID: "573ef3b1-3c55-4e67-9df0-d52895183be8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.257704 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.257737 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.257749 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573ef3b1-3c55-4e67-9df0-d52895183be8-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.257772 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt4qt\" (UniqueName: \"kubernetes.io/projected/573ef3b1-3c55-4e67-9df0-d52895183be8-kube-api-access-jt4qt\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.645050 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tw58s" event={"ID":"573ef3b1-3c55-4e67-9df0-d52895183be8","Type":"ContainerDied","Data":"50268659265d8c7d0ec54035cba23f2107f1a3e26cbfb547743f55e72054fd9d"} Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.645717 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50268659265d8c7d0ec54035cba23f2107f1a3e26cbfb547743f55e72054fd9d" Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.645066 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tw58s" Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.836780 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.837062 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11294935-80cc-4190-9ebe-8a76e8abe384" containerName="nova-api-log" containerID="cri-o://2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d" gracePeriod=30 Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.837617 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11294935-80cc-4190-9ebe-8a76e8abe384" containerName="nova-api-api" containerID="cri-o://60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4" gracePeriod=30 Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.859010 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.859293 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c6324dec-5a51-4c52-be79-9ff505e69807" containerName="nova-scheduler-scheduler" containerID="cri-o://6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394" gracePeriod=30 Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.868978 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.869513 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerName="nova-metadata-log" containerID="cri-o://30585770cca1a3a74ce55b4dba7d78f522f6d4d1cdc4154e6b545c4478d6657a" gracePeriod=30 Nov 24 12:48:05 crc kubenswrapper[4756]: I1124 12:48:05.869588 4756 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerName="nova-metadata-metadata" containerID="cri-o://d2b2d46c491c0ec947aa60a55c240d4249f48b2f69c968e94faafe45bbe4fd0e" gracePeriod=30 Nov 24 12:48:06 crc kubenswrapper[4756]: E1124 12:48:06.187798 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:48:06 crc kubenswrapper[4756]: E1124 12:48:06.191599 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:48:06 crc kubenswrapper[4756]: E1124 12:48:06.193501 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 12:48:06 crc kubenswrapper[4756]: E1124 12:48:06.193590 4756 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c6324dec-5a51-4c52-be79-9ff505e69807" containerName="nova-scheduler-scheduler" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.398692 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.484632 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9m66\" (UniqueName: \"kubernetes.io/projected/11294935-80cc-4190-9ebe-8a76e8abe384-kube-api-access-w9m66\") pod \"11294935-80cc-4190-9ebe-8a76e8abe384\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.485098 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-public-tls-certs\") pod \"11294935-80cc-4190-9ebe-8a76e8abe384\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.485255 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11294935-80cc-4190-9ebe-8a76e8abe384-logs\") pod \"11294935-80cc-4190-9ebe-8a76e8abe384\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.486352 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11294935-80cc-4190-9ebe-8a76e8abe384-logs" (OuterVolumeSpecName: "logs") pod "11294935-80cc-4190-9ebe-8a76e8abe384" (UID: "11294935-80cc-4190-9ebe-8a76e8abe384"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.486699 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-internal-tls-certs\") pod \"11294935-80cc-4190-9ebe-8a76e8abe384\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.487131 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-config-data\") pod \"11294935-80cc-4190-9ebe-8a76e8abe384\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.487506 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-combined-ca-bundle\") pod \"11294935-80cc-4190-9ebe-8a76e8abe384\" (UID: \"11294935-80cc-4190-9ebe-8a76e8abe384\") " Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.491666 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11294935-80cc-4190-9ebe-8a76e8abe384-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.491876 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11294935-80cc-4190-9ebe-8a76e8abe384-kube-api-access-w9m66" (OuterVolumeSpecName: "kube-api-access-w9m66") pod "11294935-80cc-4190-9ebe-8a76e8abe384" (UID: "11294935-80cc-4190-9ebe-8a76e8abe384"). InnerVolumeSpecName "kube-api-access-w9m66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.519479 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11294935-80cc-4190-9ebe-8a76e8abe384" (UID: "11294935-80cc-4190-9ebe-8a76e8abe384"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.526411 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-config-data" (OuterVolumeSpecName: "config-data") pod "11294935-80cc-4190-9ebe-8a76e8abe384" (UID: "11294935-80cc-4190-9ebe-8a76e8abe384"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.549351 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "11294935-80cc-4190-9ebe-8a76e8abe384" (UID: "11294935-80cc-4190-9ebe-8a76e8abe384"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.554746 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "11294935-80cc-4190-9ebe-8a76e8abe384" (UID: "11294935-80cc-4190-9ebe-8a76e8abe384"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.593740 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.593799 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.593816 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9m66\" (UniqueName: \"kubernetes.io/projected/11294935-80cc-4190-9ebe-8a76e8abe384-kube-api-access-w9m66\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.593828 4756 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.593839 4756 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11294935-80cc-4190-9ebe-8a76e8abe384-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.660172 4756 generic.go:334] "Generic (PLEG): container finished" podID="11294935-80cc-4190-9ebe-8a76e8abe384" containerID="60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4" exitCode=0 Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.660206 4756 generic.go:334] "Generic (PLEG): container finished" podID="11294935-80cc-4190-9ebe-8a76e8abe384" containerID="2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d" exitCode=143 Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.660248 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11294935-80cc-4190-9ebe-8a76e8abe384","Type":"ContainerDied","Data":"60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4"} Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.660276 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11294935-80cc-4190-9ebe-8a76e8abe384","Type":"ContainerDied","Data":"2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d"} Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.660286 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11294935-80cc-4190-9ebe-8a76e8abe384","Type":"ContainerDied","Data":"8da5e44339a7ed97f10ac41eccc6b4853b0c2fa25d5584fdcff9ce5a52d70d45"} Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.660300 4756 scope.go:117] "RemoveContainer" containerID="60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.660438 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.670775 4756 generic.go:334] "Generic (PLEG): container finished" podID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerID="30585770cca1a3a74ce55b4dba7d78f522f6d4d1cdc4154e6b545c4478d6657a" exitCode=143 Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.670851 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc18efc1-65e5-4ce5-9514-be7474d3f8bb","Type":"ContainerDied","Data":"30585770cca1a3a74ce55b4dba7d78f522f6d4d1cdc4154e6b545c4478d6657a"} Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.675974 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1e15270-1d58-42bb-ad0a-635726bae163","Type":"ContainerStarted","Data":"f68208be4d2ad954f4b4225f1c2d0f61bd6824d8bd79186bb7e3ffc12ec7a364"} Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.676383 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.695901 4756 scope.go:117] "RemoveContainer" containerID="2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.717269 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.724655 4756 scope.go:117] "RemoveContainer" containerID="60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4" Nov 24 12:48:06 crc kubenswrapper[4756]: E1124 12:48:06.725111 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4\": container with ID starting with 60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4 not found: ID does not exist" 
containerID="60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.725147 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4"} err="failed to get container status \"60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4\": rpc error: code = NotFound desc = could not find container \"60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4\": container with ID starting with 60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4 not found: ID does not exist" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.725188 4756 scope.go:117] "RemoveContainer" containerID="2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d" Nov 24 12:48:06 crc kubenswrapper[4756]: E1124 12:48:06.725444 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d\": container with ID starting with 2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d not found: ID does not exist" containerID="2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.725469 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d"} err="failed to get container status \"2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d\": rpc error: code = NotFound desc = could not find container \"2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d\": container with ID starting with 2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d not found: ID does not exist" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.725487 4756 scope.go:117] 
"RemoveContainer" containerID="60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.725838 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4"} err="failed to get container status \"60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4\": rpc error: code = NotFound desc = could not find container \"60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4\": container with ID starting with 60fff8f3484c8044974678c5e5d7173827ae79e83f5b5674e29ee1c84c6330b4 not found: ID does not exist" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.725859 4756 scope.go:117] "RemoveContainer" containerID="2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.726100 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d"} err="failed to get container status \"2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d\": rpc error: code = NotFound desc = could not find container \"2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d\": container with ID starting with 2000af77e9b1b2ff90d1b884e384c455b6d57ac720db0dcc1dd6f97802e9dd3d not found: ID does not exist" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.733864 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.744272 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 12:48:06 crc kubenswrapper[4756]: E1124 12:48:06.744826 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573ef3b1-3c55-4e67-9df0-d52895183be8" containerName="nova-manage" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 
12:48:06.744847 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="573ef3b1-3c55-4e67-9df0-d52895183be8" containerName="nova-manage" Nov 24 12:48:06 crc kubenswrapper[4756]: E1124 12:48:06.744863 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11294935-80cc-4190-9ebe-8a76e8abe384" containerName="nova-api-log" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.744871 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="11294935-80cc-4190-9ebe-8a76e8abe384" containerName="nova-api-log" Nov 24 12:48:06 crc kubenswrapper[4756]: E1124 12:48:06.744891 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11294935-80cc-4190-9ebe-8a76e8abe384" containerName="nova-api-api" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.744898 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="11294935-80cc-4190-9ebe-8a76e8abe384" containerName="nova-api-api" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.745129 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="11294935-80cc-4190-9ebe-8a76e8abe384" containerName="nova-api-api" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.745171 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="11294935-80cc-4190-9ebe-8a76e8abe384" containerName="nova-api-log" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.745194 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="573ef3b1-3c55-4e67-9df0-d52895183be8" containerName="nova-manage" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.746490 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.749748 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.749913 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.750424 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.751184 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.493647909 podStartE2EDuration="5.751147873s" podCreationTimestamp="2025-11-24 12:48:01 +0000 UTC" firstStartedPulling="2025-11-24 12:48:02.466281295 +0000 UTC m=+1214.823795437" lastFinishedPulling="2025-11-24 12:48:05.723781259 +0000 UTC m=+1218.081295401" observedRunningTime="2025-11-24 12:48:06.713775153 +0000 UTC m=+1219.071289295" watchObservedRunningTime="2025-11-24 12:48:06.751147873 +0000 UTC m=+1219.108662015" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.762257 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.900482 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8190147-b7ca-47e1-86f0-54dad2dbc996-config-data\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.900791 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8190147-b7ca-47e1-86f0-54dad2dbc996-logs\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " 
pod="openstack/nova-api-0" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.900840 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6m59\" (UniqueName: \"kubernetes.io/projected/c8190147-b7ca-47e1-86f0-54dad2dbc996-kube-api-access-s6m59\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.900871 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8190147-b7ca-47e1-86f0-54dad2dbc996-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.900941 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8190147-b7ca-47e1-86f0-54dad2dbc996-public-tls-certs\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:06 crc kubenswrapper[4756]: I1124 12:48:06.900975 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8190147-b7ca-47e1-86f0-54dad2dbc996-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.002411 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8190147-b7ca-47e1-86f0-54dad2dbc996-public-tls-certs\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.002500 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8190147-b7ca-47e1-86f0-54dad2dbc996-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.002554 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8190147-b7ca-47e1-86f0-54dad2dbc996-config-data\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.002583 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8190147-b7ca-47e1-86f0-54dad2dbc996-logs\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.002623 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6m59\" (UniqueName: \"kubernetes.io/projected/c8190147-b7ca-47e1-86f0-54dad2dbc996-kube-api-access-s6m59\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.002651 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8190147-b7ca-47e1-86f0-54dad2dbc996-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.003994 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8190147-b7ca-47e1-86f0-54dad2dbc996-logs\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 
12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.007901 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8190147-b7ca-47e1-86f0-54dad2dbc996-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.008310 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8190147-b7ca-47e1-86f0-54dad2dbc996-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.008756 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8190147-b7ca-47e1-86f0-54dad2dbc996-config-data\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.009536 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8190147-b7ca-47e1-86f0-54dad2dbc996-public-tls-certs\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.024684 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6m59\" (UniqueName: \"kubernetes.io/projected/c8190147-b7ca-47e1-86f0-54dad2dbc996-kube-api-access-s6m59\") pod \"nova-api-0\" (UID: \"c8190147-b7ca-47e1-86f0-54dad2dbc996\") " pod="openstack/nova-api-0" Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.067189 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.643709 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 12:48:07 crc kubenswrapper[4756]: W1124 12:48:07.650612 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8190147_b7ca_47e1_86f0_54dad2dbc996.slice/crio-1b421cab6c2b9c5e21a8b9a39c9e4a71d60e57eaaf058b599076732ec1678d5c WatchSource:0}: Error finding container 1b421cab6c2b9c5e21a8b9a39c9e4a71d60e57eaaf058b599076732ec1678d5c: Status 404 returned error can't find the container with id 1b421cab6c2b9c5e21a8b9a39c9e4a71d60e57eaaf058b599076732ec1678d5c Nov 24 12:48:07 crc kubenswrapper[4756]: I1124 12:48:07.688312 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8190147-b7ca-47e1-86f0-54dad2dbc996","Type":"ContainerStarted","Data":"1b421cab6c2b9c5e21a8b9a39c9e4a71d60e57eaaf058b599076732ec1678d5c"} Nov 24 12:48:08 crc kubenswrapper[4756]: I1124 12:48:08.488069 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11294935-80cc-4190-9ebe-8a76e8abe384" path="/var/lib/kubelet/pods/11294935-80cc-4190-9ebe-8a76e8abe384/volumes" Nov 24 12:48:08 crc kubenswrapper[4756]: I1124 12:48:08.708174 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8190147-b7ca-47e1-86f0-54dad2dbc996","Type":"ContainerStarted","Data":"f446f351fea31e4037a6173697fa099f479f73bd4f3405aa0c20dd8990b2831c"} Nov 24 12:48:08 crc kubenswrapper[4756]: I1124 12:48:08.708229 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8190147-b7ca-47e1-86f0-54dad2dbc996","Type":"ContainerStarted","Data":"3aa2de2f753c0c4e996dc36a06fff5a9225e8b04a301425aed681a532dea0cd0"} Nov 24 12:48:08 crc kubenswrapper[4756]: I1124 12:48:08.740998 4756 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.740969308 podStartE2EDuration="2.740969308s" podCreationTimestamp="2025-11-24 12:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:48:08.737428712 +0000 UTC m=+1221.094942854" watchObservedRunningTime="2025-11-24 12:48:08.740969308 +0000 UTC m=+1221.098483450" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.014483 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": read tcp 10.217.0.2:41724->10.217.0.209:8775: read: connection reset by peer" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.014516 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": read tcp 10.217.0.2:41722->10.217.0.209:8775: read: connection reset by peer" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.529145 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.665628 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-combined-ca-bundle\") pod \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.665770 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-config-data\") pod \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.665813 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-nova-metadata-tls-certs\") pod \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.665848 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-logs\") pod \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.665903 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm9fj\" (UniqueName: \"kubernetes.io/projected/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-kube-api-access-nm9fj\") pod \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\" (UID: \"dc18efc1-65e5-4ce5-9514-be7474d3f8bb\") " Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.666456 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-logs" (OuterVolumeSpecName: "logs") pod "dc18efc1-65e5-4ce5-9514-be7474d3f8bb" (UID: "dc18efc1-65e5-4ce5-9514-be7474d3f8bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.666637 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-logs\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.672464 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-kube-api-access-nm9fj" (OuterVolumeSpecName: "kube-api-access-nm9fj") pod "dc18efc1-65e5-4ce5-9514-be7474d3f8bb" (UID: "dc18efc1-65e5-4ce5-9514-be7474d3f8bb"). InnerVolumeSpecName "kube-api-access-nm9fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.700430 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-config-data" (OuterVolumeSpecName: "config-data") pod "dc18efc1-65e5-4ce5-9514-be7474d3f8bb" (UID: "dc18efc1-65e5-4ce5-9514-be7474d3f8bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.708994 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc18efc1-65e5-4ce5-9514-be7474d3f8bb" (UID: "dc18efc1-65e5-4ce5-9514-be7474d3f8bb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.724699 4756 generic.go:334] "Generic (PLEG): container finished" podID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerID="d2b2d46c491c0ec947aa60a55c240d4249f48b2f69c968e94faafe45bbe4fd0e" exitCode=0 Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.724826 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc18efc1-65e5-4ce5-9514-be7474d3f8bb","Type":"ContainerDied","Data":"d2b2d46c491c0ec947aa60a55c240d4249f48b2f69c968e94faafe45bbe4fd0e"} Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.724893 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc18efc1-65e5-4ce5-9514-be7474d3f8bb","Type":"ContainerDied","Data":"8de28ac73810d635636fa35e1601e019bbdd169868542750e1ee7a078295118d"} Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.724918 4756 scope.go:117] "RemoveContainer" containerID="d2b2d46c491c0ec947aa60a55c240d4249f48b2f69c968e94faafe45bbe4fd0e" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.725173 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.747081 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "dc18efc1-65e5-4ce5-9514-be7474d3f8bb" (UID: "dc18efc1-65e5-4ce5-9514-be7474d3f8bb"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.810716 4756 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.810761 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm9fj\" (UniqueName: \"kubernetes.io/projected/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-kube-api-access-nm9fj\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.810774 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.810787 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc18efc1-65e5-4ce5-9514-be7474d3f8bb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.835209 4756 scope.go:117] "RemoveContainer" containerID="30585770cca1a3a74ce55b4dba7d78f522f6d4d1cdc4154e6b545c4478d6657a" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.862379 4756 scope.go:117] "RemoveContainer" containerID="d2b2d46c491c0ec947aa60a55c240d4249f48b2f69c968e94faafe45bbe4fd0e" Nov 24 12:48:09 crc kubenswrapper[4756]: E1124 12:48:09.863200 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2b2d46c491c0ec947aa60a55c240d4249f48b2f69c968e94faafe45bbe4fd0e\": container with ID starting with d2b2d46c491c0ec947aa60a55c240d4249f48b2f69c968e94faafe45bbe4fd0e not found: ID does not exist" containerID="d2b2d46c491c0ec947aa60a55c240d4249f48b2f69c968e94faafe45bbe4fd0e" Nov 24 12:48:09 crc 
kubenswrapper[4756]: I1124 12:48:09.863244 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2b2d46c491c0ec947aa60a55c240d4249f48b2f69c968e94faafe45bbe4fd0e"} err="failed to get container status \"d2b2d46c491c0ec947aa60a55c240d4249f48b2f69c968e94faafe45bbe4fd0e\": rpc error: code = NotFound desc = could not find container \"d2b2d46c491c0ec947aa60a55c240d4249f48b2f69c968e94faafe45bbe4fd0e\": container with ID starting with d2b2d46c491c0ec947aa60a55c240d4249f48b2f69c968e94faafe45bbe4fd0e not found: ID does not exist" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.863271 4756 scope.go:117] "RemoveContainer" containerID="30585770cca1a3a74ce55b4dba7d78f522f6d4d1cdc4154e6b545c4478d6657a" Nov 24 12:48:09 crc kubenswrapper[4756]: E1124 12:48:09.863510 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30585770cca1a3a74ce55b4dba7d78f522f6d4d1cdc4154e6b545c4478d6657a\": container with ID starting with 30585770cca1a3a74ce55b4dba7d78f522f6d4d1cdc4154e6b545c4478d6657a not found: ID does not exist" containerID="30585770cca1a3a74ce55b4dba7d78f522f6d4d1cdc4154e6b545c4478d6657a" Nov 24 12:48:09 crc kubenswrapper[4756]: I1124 12:48:09.863535 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30585770cca1a3a74ce55b4dba7d78f522f6d4d1cdc4154e6b545c4478d6657a"} err="failed to get container status \"30585770cca1a3a74ce55b4dba7d78f522f6d4d1cdc4154e6b545c4478d6657a\": rpc error: code = NotFound desc = could not find container \"30585770cca1a3a74ce55b4dba7d78f522f6d4d1cdc4154e6b545c4478d6657a\": container with ID starting with 30585770cca1a3a74ce55b4dba7d78f522f6d4d1cdc4154e6b545c4478d6657a not found: ID does not exist" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.069173 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:48:10 crc 
kubenswrapper[4756]: I1124 12:48:10.083086 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.092505 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:48:10 crc kubenswrapper[4756]: E1124 12:48:10.093095 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerName="nova-metadata-metadata" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.093112 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerName="nova-metadata-metadata" Nov 24 12:48:10 crc kubenswrapper[4756]: E1124 12:48:10.093140 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerName="nova-metadata-log" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.093146 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerName="nova-metadata-log" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.093509 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerName="nova-metadata-metadata" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.093533 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" containerName="nova-metadata-log" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.094924 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.098493 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.098894 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.100909 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.227076 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbc35708-5fe2-4f73-b7f0-958f40e12f63-config-data\") pod \"nova-metadata-0\" (UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.227171 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frd5l\" (UniqueName: \"kubernetes.io/projected/fbc35708-5fe2-4f73-b7f0-958f40e12f63-kube-api-access-frd5l\") pod \"nova-metadata-0\" (UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.227274 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbc35708-5fe2-4f73-b7f0-958f40e12f63-logs\") pod \"nova-metadata-0\" (UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.227376 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc35708-5fe2-4f73-b7f0-958f40e12f63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.227726 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbc35708-5fe2-4f73-b7f0-958f40e12f63-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.329880 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbc35708-5fe2-4f73-b7f0-958f40e12f63-logs\") pod \"nova-metadata-0\" (UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.329994 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc35708-5fe2-4f73-b7f0-958f40e12f63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.330112 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbc35708-5fe2-4f73-b7f0-958f40e12f63-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.330833 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbc35708-5fe2-4f73-b7f0-958f40e12f63-logs\") pod \"nova-metadata-0\" (UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.331327 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbc35708-5fe2-4f73-b7f0-958f40e12f63-config-data\") pod \"nova-metadata-0\" (UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.331390 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frd5l\" (UniqueName: \"kubernetes.io/projected/fbc35708-5fe2-4f73-b7f0-958f40e12f63-kube-api-access-frd5l\") pod \"nova-metadata-0\" (UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.335802 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbc35708-5fe2-4f73-b7f0-958f40e12f63-config-data\") pod \"nova-metadata-0\" (UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.336412 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbc35708-5fe2-4f73-b7f0-958f40e12f63-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.347537 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbc35708-5fe2-4f73-b7f0-958f40e12f63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.350251 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frd5l\" (UniqueName: \"kubernetes.io/projected/fbc35708-5fe2-4f73-b7f0-958f40e12f63-kube-api-access-frd5l\") pod \"nova-metadata-0\" 
(UID: \"fbc35708-5fe2-4f73-b7f0-958f40e12f63\") " pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.487633 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc18efc1-65e5-4ce5-9514-be7474d3f8bb" path="/var/lib/kubelet/pods/dc18efc1-65e5-4ce5-9514-be7474d3f8bb/volumes" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.563767 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.565666 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.738469 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6324dec-5a51-4c52-be79-9ff505e69807-combined-ca-bundle\") pod \"c6324dec-5a51-4c52-be79-9ff505e69807\" (UID: \"c6324dec-5a51-4c52-be79-9ff505e69807\") " Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.738519 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nrh7\" (UniqueName: \"kubernetes.io/projected/c6324dec-5a51-4c52-be79-9ff505e69807-kube-api-access-9nrh7\") pod \"c6324dec-5a51-4c52-be79-9ff505e69807\" (UID: \"c6324dec-5a51-4c52-be79-9ff505e69807\") " Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.738546 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6324dec-5a51-4c52-be79-9ff505e69807-config-data\") pod \"c6324dec-5a51-4c52-be79-9ff505e69807\" (UID: \"c6324dec-5a51-4c52-be79-9ff505e69807\") " Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.744756 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6324dec-5a51-4c52-be79-9ff505e69807-kube-api-access-9nrh7" (OuterVolumeSpecName: 
"kube-api-access-9nrh7") pod "c6324dec-5a51-4c52-be79-9ff505e69807" (UID: "c6324dec-5a51-4c52-be79-9ff505e69807"). InnerVolumeSpecName "kube-api-access-9nrh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.751548 4756 generic.go:334] "Generic (PLEG): container finished" podID="c6324dec-5a51-4c52-be79-9ff505e69807" containerID="6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394" exitCode=0 Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.751620 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c6324dec-5a51-4c52-be79-9ff505e69807","Type":"ContainerDied","Data":"6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394"} Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.751654 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c6324dec-5a51-4c52-be79-9ff505e69807","Type":"ContainerDied","Data":"66c87cd15a731935ad88b6ce9290a4098232a7082ad6f34d4984070257cd840d"} Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.751676 4756 scope.go:117] "RemoveContainer" containerID="6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.751713 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.788818 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6324dec-5a51-4c52-be79-9ff505e69807-config-data" (OuterVolumeSpecName: "config-data") pod "c6324dec-5a51-4c52-be79-9ff505e69807" (UID: "c6324dec-5a51-4c52-be79-9ff505e69807"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.806899 4756 scope.go:117] "RemoveContainer" containerID="6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394" Nov 24 12:48:10 crc kubenswrapper[4756]: E1124 12:48:10.807414 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394\": container with ID starting with 6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394 not found: ID does not exist" containerID="6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.807447 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394"} err="failed to get container status \"6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394\": rpc error: code = NotFound desc = could not find container \"6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394\": container with ID starting with 6655a751b8d42860db35b417325c077c829cc58bc4eb48eb5915cef7dec37394 not found: ID does not exist" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.808062 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6324dec-5a51-4c52-be79-9ff505e69807-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6324dec-5a51-4c52-be79-9ff505e69807" (UID: "c6324dec-5a51-4c52-be79-9ff505e69807"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.843719 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6324dec-5a51-4c52-be79-9ff505e69807-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.843755 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nrh7\" (UniqueName: \"kubernetes.io/projected/c6324dec-5a51-4c52-be79-9ff505e69807-kube-api-access-9nrh7\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:10 crc kubenswrapper[4756]: I1124 12:48:10.843767 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6324dec-5a51-4c52-be79-9ff505e69807-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:11 crc kubenswrapper[4756]: W1124 12:48:11.061621 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbc35708_5fe2_4f73_b7f0_958f40e12f63.slice/crio-3197534bbe3986ee555c5f8495bca57f0f1f7e8a190d1f2116f408c7da88ea10 WatchSource:0}: Error finding container 3197534bbe3986ee555c5f8495bca57f0f1f7e8a190d1f2116f408c7da88ea10: Status 404 returned error can't find the container with id 3197534bbe3986ee555c5f8495bca57f0f1f7e8a190d1f2116f408c7da88ea10 Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.061853 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.098191 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.114604 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.143360 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Nov 24 12:48:11 crc kubenswrapper[4756]: E1124 12:48:11.144003 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6324dec-5a51-4c52-be79-9ff505e69807" containerName="nova-scheduler-scheduler" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.144024 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6324dec-5a51-4c52-be79-9ff505e69807" containerName="nova-scheduler-scheduler" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.144291 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6324dec-5a51-4c52-be79-9ff505e69807" containerName="nova-scheduler-scheduler" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.145045 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.157425 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.167946 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.253681 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18d356c0-8e84-4ec3-b61c-bef4f3906505-config-data\") pod \"nova-scheduler-0\" (UID: \"18d356c0-8e84-4ec3-b61c-bef4f3906505\") " pod="openstack/nova-scheduler-0" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.254330 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg6mc\" (UniqueName: \"kubernetes.io/projected/18d356c0-8e84-4ec3-b61c-bef4f3906505-kube-api-access-jg6mc\") pod \"nova-scheduler-0\" (UID: \"18d356c0-8e84-4ec3-b61c-bef4f3906505\") " pod="openstack/nova-scheduler-0" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.254536 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18d356c0-8e84-4ec3-b61c-bef4f3906505-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"18d356c0-8e84-4ec3-b61c-bef4f3906505\") " pod="openstack/nova-scheduler-0" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.356755 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18d356c0-8e84-4ec3-b61c-bef4f3906505-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"18d356c0-8e84-4ec3-b61c-bef4f3906505\") " pod="openstack/nova-scheduler-0" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.356863 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18d356c0-8e84-4ec3-b61c-bef4f3906505-config-data\") pod \"nova-scheduler-0\" (UID: \"18d356c0-8e84-4ec3-b61c-bef4f3906505\") " pod="openstack/nova-scheduler-0" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.356953 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg6mc\" (UniqueName: \"kubernetes.io/projected/18d356c0-8e84-4ec3-b61c-bef4f3906505-kube-api-access-jg6mc\") pod \"nova-scheduler-0\" (UID: \"18d356c0-8e84-4ec3-b61c-bef4f3906505\") " pod="openstack/nova-scheduler-0" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.366684 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18d356c0-8e84-4ec3-b61c-bef4f3906505-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"18d356c0-8e84-4ec3-b61c-bef4f3906505\") " pod="openstack/nova-scheduler-0" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.382324 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/18d356c0-8e84-4ec3-b61c-bef4f3906505-config-data\") pod \"nova-scheduler-0\" (UID: \"18d356c0-8e84-4ec3-b61c-bef4f3906505\") " pod="openstack/nova-scheduler-0" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.394424 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg6mc\" (UniqueName: \"kubernetes.io/projected/18d356c0-8e84-4ec3-b61c-bef4f3906505-kube-api-access-jg6mc\") pod \"nova-scheduler-0\" (UID: \"18d356c0-8e84-4ec3-b61c-bef4f3906505\") " pod="openstack/nova-scheduler-0" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.524883 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.770681 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbc35708-5fe2-4f73-b7f0-958f40e12f63","Type":"ContainerStarted","Data":"91807750633aa05a2fbcd3ab793c1f5b8f2fd3b53aae0266ce19d510d12821c7"} Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.770732 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbc35708-5fe2-4f73-b7f0-958f40e12f63","Type":"ContainerStarted","Data":"3cf1a729c1f514812c5394c2366b915f2832b25dde4bc1aed0bec1317acf1fcf"} Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.770743 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbc35708-5fe2-4f73-b7f0-958f40e12f63","Type":"ContainerStarted","Data":"3197534bbe3986ee555c5f8495bca57f0f1f7e8a190d1f2116f408c7da88ea10"} Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.797330 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.797307526 podStartE2EDuration="1.797307526s" podCreationTimestamp="2025-11-24 12:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:48:11.795805345 +0000 UTC m=+1224.153319497" watchObservedRunningTime="2025-11-24 12:48:11.797307526 +0000 UTC m=+1224.154821678" Nov 24 12:48:11 crc kubenswrapper[4756]: W1124 12:48:11.993607 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18d356c0_8e84_4ec3_b61c_bef4f3906505.slice/crio-a6e1e83f13517c7a7b7d746aaa84d8d63e7cf103f28f614aa5fd688efcd56e0f WatchSource:0}: Error finding container a6e1e83f13517c7a7b7d746aaa84d8d63e7cf103f28f614aa5fd688efcd56e0f: Status 404 returned error can't find the container with id a6e1e83f13517c7a7b7d746aaa84d8d63e7cf103f28f614aa5fd688efcd56e0f Nov 24 12:48:11 crc kubenswrapper[4756]: I1124 12:48:11.998619 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 12:48:12 crc kubenswrapper[4756]: I1124 12:48:12.490805 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6324dec-5a51-4c52-be79-9ff505e69807" path="/var/lib/kubelet/pods/c6324dec-5a51-4c52-be79-9ff505e69807/volumes" Nov 24 12:48:12 crc kubenswrapper[4756]: I1124 12:48:12.782511 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"18d356c0-8e84-4ec3-b61c-bef4f3906505","Type":"ContainerStarted","Data":"b4c74b86ccfaf4dc876744118ed9f2a1a877c296086bf5159c5291f97314d4b3"} Nov 24 12:48:12 crc kubenswrapper[4756]: I1124 12:48:12.782598 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"18d356c0-8e84-4ec3-b61c-bef4f3906505","Type":"ContainerStarted","Data":"a6e1e83f13517c7a7b7d746aaa84d8d63e7cf103f28f614aa5fd688efcd56e0f"} Nov 24 12:48:12 crc kubenswrapper[4756]: I1124 12:48:12.805278 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.8052577859999999 
podStartE2EDuration="1.805257786s" podCreationTimestamp="2025-11-24 12:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:48:12.797386553 +0000 UTC m=+1225.154900735" watchObservedRunningTime="2025-11-24 12:48:12.805257786 +0000 UTC m=+1225.162771928" Nov 24 12:48:15 crc kubenswrapper[4756]: I1124 12:48:15.564826 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:48:15 crc kubenswrapper[4756]: I1124 12:48:15.565404 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 12:48:16 crc kubenswrapper[4756]: I1124 12:48:16.525256 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 12:48:17 crc kubenswrapper[4756]: I1124 12:48:17.068775 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:48:17 crc kubenswrapper[4756]: I1124 12:48:17.069078 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 12:48:18 crc kubenswrapper[4756]: I1124 12:48:18.081417 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c8190147-b7ca-47e1-86f0-54dad2dbc996" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:48:18 crc kubenswrapper[4756]: I1124 12:48:18.081405 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c8190147-b7ca-47e1-86f0-54dad2dbc996" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:48:20 crc kubenswrapper[4756]: I1124 12:48:20.564899 4756 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 12:48:20 crc kubenswrapper[4756]: I1124 12:48:20.565248 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 12:48:21 crc kubenswrapper[4756]: I1124 12:48:21.526220 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 12:48:21 crc kubenswrapper[4756]: I1124 12:48:21.556087 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 12:48:21 crc kubenswrapper[4756]: I1124 12:48:21.579698 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fbc35708-5fe2-4f73-b7f0-958f40e12f63" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:48:21 crc kubenswrapper[4756]: I1124 12:48:21.579748 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fbc35708-5fe2-4f73-b7f0-958f40e12f63" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 12:48:21 crc kubenswrapper[4756]: I1124 12:48:21.918793 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 12:48:27 crc kubenswrapper[4756]: I1124 12:48:27.074425 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 12:48:27 crc kubenswrapper[4756]: I1124 12:48:27.075354 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 12:48:27 crc kubenswrapper[4756]: I1124 12:48:27.080501 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Nov 24 12:48:27 crc kubenswrapper[4756]: I1124 12:48:27.082562 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 12:48:27 crc kubenswrapper[4756]: I1124 12:48:27.948895 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 12:48:27 crc kubenswrapper[4756]: I1124 12:48:27.956315 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 12:48:30 crc kubenswrapper[4756]: I1124 12:48:30.569511 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 12:48:30 crc kubenswrapper[4756]: I1124 12:48:30.570093 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 12:48:30 crc kubenswrapper[4756]: I1124 12:48:30.575970 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 12:48:30 crc kubenswrapper[4756]: I1124 12:48:30.576243 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 12:48:31 crc kubenswrapper[4756]: I1124 12:48:31.999111 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 12:48:42 crc kubenswrapper[4756]: I1124 12:48:42.771596 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:48:43 crc kubenswrapper[4756]: I1124 12:48:43.662563 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:48:47 crc kubenswrapper[4756]: I1124 12:48:47.496880 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="eda12351-eabf-4909-a8fe-4cc2c3dabdb9" containerName="rabbitmq" containerID="cri-o://fc172aada172233b43b24d21dfeec3ae07cd76e8a3b14af0a6672ecdaa5a9530" 
gracePeriod=604796 Nov 24 12:48:48 crc kubenswrapper[4756]: I1124 12:48:48.160619 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="12075358-f893-49bc-9ace-dda0ce2865ec" containerName="rabbitmq" containerID="cri-o://742e913f8e53724d19fed9df2c78a5575a02242db77eada142090a9e60473214" gracePeriod=604796 Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.045685 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.161727 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-plugins-conf\") pod \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.161768 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-pod-info\") pod \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.161851 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-erlang-cookie-secret\") pod \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.161898 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-tls\") pod \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " Nov 24 12:48:54 crc 
kubenswrapper[4756]: I1124 12:48:54.161929 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.161988 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-erlang-cookie\") pod \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.162017 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-server-conf\") pod \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.162083 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-confd\") pod \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.162130 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cxrz\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-kube-api-access-4cxrz\") pod \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.162187 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-config-data\") pod 
\"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.162203 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-plugins\") pod \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\" (UID: \"eda12351-eabf-4909-a8fe-4cc2c3dabdb9\") " Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.164631 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "eda12351-eabf-4909-a8fe-4cc2c3dabdb9" (UID: "eda12351-eabf-4909-a8fe-4cc2c3dabdb9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.165123 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "eda12351-eabf-4909-a8fe-4cc2c3dabdb9" (UID: "eda12351-eabf-4909-a8fe-4cc2c3dabdb9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.168168 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "eda12351-eabf-4909-a8fe-4cc2c3dabdb9" (UID: "eda12351-eabf-4909-a8fe-4cc2c3dabdb9"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.172449 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "eda12351-eabf-4909-a8fe-4cc2c3dabdb9" (UID: "eda12351-eabf-4909-a8fe-4cc2c3dabdb9"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.174078 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-kube-api-access-4cxrz" (OuterVolumeSpecName: "kube-api-access-4cxrz") pod "eda12351-eabf-4909-a8fe-4cc2c3dabdb9" (UID: "eda12351-eabf-4909-a8fe-4cc2c3dabdb9"). InnerVolumeSpecName "kube-api-access-4cxrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.175084 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "eda12351-eabf-4909-a8fe-4cc2c3dabdb9" (UID: "eda12351-eabf-4909-a8fe-4cc2c3dabdb9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.178017 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "eda12351-eabf-4909-a8fe-4cc2c3dabdb9" (UID: "eda12351-eabf-4909-a8fe-4cc2c3dabdb9"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.207153 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-config-data" (OuterVolumeSpecName: "config-data") pod "eda12351-eabf-4909-a8fe-4cc2c3dabdb9" (UID: "eda12351-eabf-4909-a8fe-4cc2c3dabdb9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.222413 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-pod-info" (OuterVolumeSpecName: "pod-info") pod "eda12351-eabf-4909-a8fe-4cc2c3dabdb9" (UID: "eda12351-eabf-4909-a8fe-4cc2c3dabdb9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.227393 4756 generic.go:334] "Generic (PLEG): container finished" podID="eda12351-eabf-4909-a8fe-4cc2c3dabdb9" containerID="fc172aada172233b43b24d21dfeec3ae07cd76e8a3b14af0a6672ecdaa5a9530" exitCode=0 Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.227432 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eda12351-eabf-4909-a8fe-4cc2c3dabdb9","Type":"ContainerDied","Data":"fc172aada172233b43b24d21dfeec3ae07cd76e8a3b14af0a6672ecdaa5a9530"} Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.227458 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eda12351-eabf-4909-a8fe-4cc2c3dabdb9","Type":"ContainerDied","Data":"e122b3ad9fd1fddd946e5e75697f44cc9ff33c2ffa575390e749a455431f7ff1"} Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.227475 4756 scope.go:117] "RemoveContainer" containerID="fc172aada172233b43b24d21dfeec3ae07cd76e8a3b14af0a6672ecdaa5a9530" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 
12:48:54.227618 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.259126 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-server-conf" (OuterVolumeSpecName: "server-conf") pod "eda12351-eabf-4909-a8fe-4cc2c3dabdb9" (UID: "eda12351-eabf-4909-a8fe-4cc2c3dabdb9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.277757 4756 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.277784 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.277811 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.277825 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.277838 4756 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.277849 4756 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-4cxrz\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-kube-api-access-4cxrz\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.277860 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.277872 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.277883 4756 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.277894 4756 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.326140 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "eda12351-eabf-4909-a8fe-4cc2c3dabdb9" (UID: "eda12351-eabf-4909-a8fe-4cc2c3dabdb9"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.335613 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.342330 4756 scope.go:117] "RemoveContainer" containerID="c7e6c0443476a62093749ddba3449300f67259c6a11162bd7d5cfd9095a76317" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.380210 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.380253 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eda12351-eabf-4909-a8fe-4cc2c3dabdb9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.416385 4756 scope.go:117] "RemoveContainer" containerID="fc172aada172233b43b24d21dfeec3ae07cd76e8a3b14af0a6672ecdaa5a9530" Nov 24 12:48:54 crc kubenswrapper[4756]: E1124 12:48:54.417039 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc172aada172233b43b24d21dfeec3ae07cd76e8a3b14af0a6672ecdaa5a9530\": container with ID starting with fc172aada172233b43b24d21dfeec3ae07cd76e8a3b14af0a6672ecdaa5a9530 not found: ID does not exist" containerID="fc172aada172233b43b24d21dfeec3ae07cd76e8a3b14af0a6672ecdaa5a9530" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.417183 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc172aada172233b43b24d21dfeec3ae07cd76e8a3b14af0a6672ecdaa5a9530"} err="failed to get container status \"fc172aada172233b43b24d21dfeec3ae07cd76e8a3b14af0a6672ecdaa5a9530\": rpc error: code = NotFound desc = could 
not find container \"fc172aada172233b43b24d21dfeec3ae07cd76e8a3b14af0a6672ecdaa5a9530\": container with ID starting with fc172aada172233b43b24d21dfeec3ae07cd76e8a3b14af0a6672ecdaa5a9530 not found: ID does not exist" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.417233 4756 scope.go:117] "RemoveContainer" containerID="c7e6c0443476a62093749ddba3449300f67259c6a11162bd7d5cfd9095a76317" Nov 24 12:48:54 crc kubenswrapper[4756]: E1124 12:48:54.418360 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7e6c0443476a62093749ddba3449300f67259c6a11162bd7d5cfd9095a76317\": container with ID starting with c7e6c0443476a62093749ddba3449300f67259c6a11162bd7d5cfd9095a76317 not found: ID does not exist" containerID="c7e6c0443476a62093749ddba3449300f67259c6a11162bd7d5cfd9095a76317" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.418416 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e6c0443476a62093749ddba3449300f67259c6a11162bd7d5cfd9095a76317"} err="failed to get container status \"c7e6c0443476a62093749ddba3449300f67259c6a11162bd7d5cfd9095a76317\": rpc error: code = NotFound desc = could not find container \"c7e6c0443476a62093749ddba3449300f67259c6a11162bd7d5cfd9095a76317\": container with ID starting with c7e6c0443476a62093749ddba3449300f67259c6a11162bd7d5cfd9095a76317 not found: ID does not exist" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.566806 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.577984 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.602013 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:48:54 crc kubenswrapper[4756]: E1124 12:48:54.602579 4756 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="eda12351-eabf-4909-a8fe-4cc2c3dabdb9" containerName="rabbitmq" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.602597 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda12351-eabf-4909-a8fe-4cc2c3dabdb9" containerName="rabbitmq" Nov 24 12:48:54 crc kubenswrapper[4756]: E1124 12:48:54.602655 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda12351-eabf-4909-a8fe-4cc2c3dabdb9" containerName="setup-container" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.602663 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda12351-eabf-4909-a8fe-4cc2c3dabdb9" containerName="setup-container" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.602911 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda12351-eabf-4909-a8fe-4cc2c3dabdb9" containerName="rabbitmq" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.604183 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.608670 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.608845 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.616543 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.616610 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.616702 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.616832 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k7csk" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.616613 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.631495 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.690480 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df531574-9350-4c19-bc09-b95744b731d0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.690544 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/df531574-9350-4c19-bc09-b95744b731d0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.690573 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df531574-9350-4c19-bc09-b95744b731d0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.690599 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z48rr\" (UniqueName: \"kubernetes.io/projected/df531574-9350-4c19-bc09-b95744b731d0-kube-api-access-z48rr\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.690627 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df531574-9350-4c19-bc09-b95744b731d0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.690651 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df531574-9350-4c19-bc09-b95744b731d0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.690683 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.690718 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df531574-9350-4c19-bc09-b95744b731d0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.690744 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df531574-9350-4c19-bc09-b95744b731d0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.690765 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df531574-9350-4c19-bc09-b95744b731d0-config-data\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.690797 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df531574-9350-4c19-bc09-b95744b731d0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.795112 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df531574-9350-4c19-bc09-b95744b731d0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.795237 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df531574-9350-4c19-bc09-b95744b731d0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.795302 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z48rr\" (UniqueName: \"kubernetes.io/projected/df531574-9350-4c19-bc09-b95744b731d0-kube-api-access-z48rr\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.795343 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df531574-9350-4c19-bc09-b95744b731d0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.795394 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df531574-9350-4c19-bc09-b95744b731d0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.795458 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.795505 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df531574-9350-4c19-bc09-b95744b731d0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.795565 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df531574-9350-4c19-bc09-b95744b731d0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.795621 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df531574-9350-4c19-bc09-b95744b731d0-config-data\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.795668 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df531574-9350-4c19-bc09-b95744b731d0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.797078 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df531574-9350-4c19-bc09-b95744b731d0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.797220 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/df531574-9350-4c19-bc09-b95744b731d0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.797617 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.797808 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df531574-9350-4c19-bc09-b95744b731d0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.797961 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df531574-9350-4c19-bc09-b95744b731d0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.798954 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df531574-9350-4c19-bc09-b95744b731d0-config-data\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.803545 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df531574-9350-4c19-bc09-b95744b731d0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " 
pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.803717 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df531574-9350-4c19-bc09-b95744b731d0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.806962 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df531574-9350-4c19-bc09-b95744b731d0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.813120 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df531574-9350-4c19-bc09-b95744b731d0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.820906 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df531574-9350-4c19-bc09-b95744b731d0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.838360 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.838719 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z48rr\" (UniqueName: 
\"kubernetes.io/projected/df531574-9350-4c19-bc09-b95744b731d0-kube-api-access-z48rr\") pod \"rabbitmq-server-0\" (UID: \"df531574-9350-4c19-bc09-b95744b731d0\") " pod="openstack/rabbitmq-server-0" Nov 24 12:48:54 crc kubenswrapper[4756]: I1124 12:48:54.925578 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.034922 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.104416 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-config-data\") pod \"12075358-f893-49bc-9ace-dda0ce2865ec\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.104509 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-server-conf\") pod \"12075358-f893-49bc-9ace-dda0ce2865ec\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.104577 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-tls\") pod \"12075358-f893-49bc-9ace-dda0ce2865ec\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.104620 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-plugins-conf\") pod \"12075358-f893-49bc-9ace-dda0ce2865ec\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 
12:48:55.104655 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12075358-f893-49bc-9ace-dda0ce2865ec-pod-info\") pod \"12075358-f893-49bc-9ace-dda0ce2865ec\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.104682 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-erlang-cookie\") pod \"12075358-f893-49bc-9ace-dda0ce2865ec\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.104715 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12075358-f893-49bc-9ace-dda0ce2865ec-erlang-cookie-secret\") pod \"12075358-f893-49bc-9ace-dda0ce2865ec\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.104747 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzv5k\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-kube-api-access-tzv5k\") pod \"12075358-f893-49bc-9ace-dda0ce2865ec\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.104799 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-confd\") pod \"12075358-f893-49bc-9ace-dda0ce2865ec\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.104882 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"12075358-f893-49bc-9ace-dda0ce2865ec\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.104938 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-plugins\") pod \"12075358-f893-49bc-9ace-dda0ce2865ec\" (UID: \"12075358-f893-49bc-9ace-dda0ce2865ec\") " Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.111224 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "12075358-f893-49bc-9ace-dda0ce2865ec" (UID: "12075358-f893-49bc-9ace-dda0ce2865ec"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.111514 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "12075358-f893-49bc-9ace-dda0ce2865ec" (UID: "12075358-f893-49bc-9ace-dda0ce2865ec"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.112144 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "12075358-f893-49bc-9ace-dda0ce2865ec" (UID: "12075358-f893-49bc-9ace-dda0ce2865ec"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.114698 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "12075358-f893-49bc-9ace-dda0ce2865ec" (UID: "12075358-f893-49bc-9ace-dda0ce2865ec"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.127343 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-kube-api-access-tzv5k" (OuterVolumeSpecName: "kube-api-access-tzv5k") pod "12075358-f893-49bc-9ace-dda0ce2865ec" (UID: "12075358-f893-49bc-9ace-dda0ce2865ec"). InnerVolumeSpecName "kube-api-access-tzv5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.128561 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/12075358-f893-49bc-9ace-dda0ce2865ec-pod-info" (OuterVolumeSpecName: "pod-info") pod "12075358-f893-49bc-9ace-dda0ce2865ec" (UID: "12075358-f893-49bc-9ace-dda0ce2865ec"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.128671 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "12075358-f893-49bc-9ace-dda0ce2865ec" (UID: "12075358-f893-49bc-9ace-dda0ce2865ec"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.130036 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12075358-f893-49bc-9ace-dda0ce2865ec-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "12075358-f893-49bc-9ace-dda0ce2865ec" (UID: "12075358-f893-49bc-9ace-dda0ce2865ec"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.168329 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-config-data" (OuterVolumeSpecName: "config-data") pod "12075358-f893-49bc-9ace-dda0ce2865ec" (UID: "12075358-f893-49bc-9ace-dda0ce2865ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.194470 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-server-conf" (OuterVolumeSpecName: "server-conf") pod "12075358-f893-49bc-9ace-dda0ce2865ec" (UID: "12075358-f893-49bc-9ace-dda0ce2865ec"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.209333 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.209369 4756 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.209380 4756 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12075358-f893-49bc-9ace-dda0ce2865ec-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.209392 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.209405 4756 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12075358-f893-49bc-9ace-dda0ce2865ec-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.209416 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzv5k\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-kube-api-access-tzv5k\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.209443 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 
12:48:55.209453 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.209464 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.209474 4756 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12075358-f893-49bc-9ace-dda0ce2865ec-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.237791 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.250095 4756 generic.go:334] "Generic (PLEG): container finished" podID="12075358-f893-49bc-9ace-dda0ce2865ec" containerID="742e913f8e53724d19fed9df2c78a5575a02242db77eada142090a9e60473214" exitCode=0 Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.250149 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12075358-f893-49bc-9ace-dda0ce2865ec","Type":"ContainerDied","Data":"742e913f8e53724d19fed9df2c78a5575a02242db77eada142090a9e60473214"} Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.250194 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12075358-f893-49bc-9ace-dda0ce2865ec","Type":"ContainerDied","Data":"b1461fd0949631826e77090ec0eae9459065eb5544e43d0567b1df6be0ef3789"} Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.250214 4756 scope.go:117] "RemoveContainer" 
containerID="742e913f8e53724d19fed9df2c78a5575a02242db77eada142090a9e60473214" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.250384 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.287824 4756 scope.go:117] "RemoveContainer" containerID="61f8a5a4378bc5c37ebb21ccd8540519b1be51c2229aba8976e4848e526ac277" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.289715 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "12075358-f893-49bc-9ace-dda0ce2865ec" (UID: "12075358-f893-49bc-9ace-dda0ce2865ec"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.311662 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12075358-f893-49bc-9ace-dda0ce2865ec-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.311704 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.320219 4756 scope.go:117] "RemoveContainer" containerID="742e913f8e53724d19fed9df2c78a5575a02242db77eada142090a9e60473214" Nov 24 12:48:55 crc kubenswrapper[4756]: E1124 12:48:55.324312 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"742e913f8e53724d19fed9df2c78a5575a02242db77eada142090a9e60473214\": container with ID starting with 742e913f8e53724d19fed9df2c78a5575a02242db77eada142090a9e60473214 not found: ID does not exist" 
containerID="742e913f8e53724d19fed9df2c78a5575a02242db77eada142090a9e60473214" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.324369 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"742e913f8e53724d19fed9df2c78a5575a02242db77eada142090a9e60473214"} err="failed to get container status \"742e913f8e53724d19fed9df2c78a5575a02242db77eada142090a9e60473214\": rpc error: code = NotFound desc = could not find container \"742e913f8e53724d19fed9df2c78a5575a02242db77eada142090a9e60473214\": container with ID starting with 742e913f8e53724d19fed9df2c78a5575a02242db77eada142090a9e60473214 not found: ID does not exist" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.324400 4756 scope.go:117] "RemoveContainer" containerID="61f8a5a4378bc5c37ebb21ccd8540519b1be51c2229aba8976e4848e526ac277" Nov 24 12:48:55 crc kubenswrapper[4756]: E1124 12:48:55.324750 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f8a5a4378bc5c37ebb21ccd8540519b1be51c2229aba8976e4848e526ac277\": container with ID starting with 61f8a5a4378bc5c37ebb21ccd8540519b1be51c2229aba8976e4848e526ac277 not found: ID does not exist" containerID="61f8a5a4378bc5c37ebb21ccd8540519b1be51c2229aba8976e4848e526ac277" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.324783 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f8a5a4378bc5c37ebb21ccd8540519b1be51c2229aba8976e4848e526ac277"} err="failed to get container status \"61f8a5a4378bc5c37ebb21ccd8540519b1be51c2229aba8976e4848e526ac277\": rpc error: code = NotFound desc = could not find container \"61f8a5a4378bc5c37ebb21ccd8540519b1be51c2229aba8976e4848e526ac277\": container with ID starting with 61f8a5a4378bc5c37ebb21ccd8540519b1be51c2229aba8976e4848e526ac277 not found: ID does not exist" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.455663 4756 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-mmnqs"] Nov 24 12:48:55 crc kubenswrapper[4756]: E1124 12:48:55.456344 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12075358-f893-49bc-9ace-dda0ce2865ec" containerName="setup-container" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.456425 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="12075358-f893-49bc-9ace-dda0ce2865ec" containerName="setup-container" Nov 24 12:48:55 crc kubenswrapper[4756]: E1124 12:48:55.456510 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12075358-f893-49bc-9ace-dda0ce2865ec" containerName="rabbitmq" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.456564 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="12075358-f893-49bc-9ace-dda0ce2865ec" containerName="rabbitmq" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.456838 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="12075358-f893-49bc-9ace-dda0ce2865ec" containerName="rabbitmq" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.458170 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.464518 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.477482 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-mmnqs"] Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.492972 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.611786 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.616858 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.616965 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.617070 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp7wm\" (UniqueName: \"kubernetes.io/projected/284cab86-c5c6-42e4-a83e-7c898ec04933-kube-api-access-xp7wm\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 
12:48:55.617102 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.617179 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-config\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.617249 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-dns-svc\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.617278 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.630363 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.640974 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.643012 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.648980 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.649384 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.649489 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.649383 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.649667 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.649669 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.650413 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mprjs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.654897 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.719619 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80c56614-94b5-4a4b-843b-0941f1899ad8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.719689 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80c56614-94b5-4a4b-843b-0941f1899ad8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.719737 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-config\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.719784 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80c56614-94b5-4a4b-843b-0941f1899ad8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.719811 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80c56614-94b5-4a4b-843b-0941f1899ad8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.719862 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80c56614-94b5-4a4b-843b-0941f1899ad8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.719893 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/80c56614-94b5-4a4b-843b-0941f1899ad8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.719920 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-dns-svc\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.719944 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.719969 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80c56614-94b5-4a4b-843b-0941f1899ad8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.719994 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.720044 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/80c56614-94b5-4a4b-843b-0941f1899ad8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.720147 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.720203 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80c56614-94b5-4a4b-843b-0941f1899ad8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.720233 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-476zd\" (UniqueName: \"kubernetes.io/projected/80c56614-94b5-4a4b-843b-0941f1899ad8-kube-api-access-476zd\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.720256 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.720294 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp7wm\" (UniqueName: 
\"kubernetes.io/projected/284cab86-c5c6-42e4-a83e-7c898ec04933-kube-api-access-xp7wm\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.720325 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.721402 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.722079 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-config\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.722767 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-dns-svc\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.723640 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-dns-swift-storage-0\") pod 
\"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.724404 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.725248 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.747104 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp7wm\" (UniqueName: \"kubernetes.io/projected/284cab86-c5c6-42e4-a83e-7c898ec04933-kube-api-access-xp7wm\") pod \"dnsmasq-dns-d558885bc-mmnqs\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.783685 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.822426 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80c56614-94b5-4a4b-843b-0941f1899ad8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.822470 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-476zd\" (UniqueName: \"kubernetes.io/projected/80c56614-94b5-4a4b-843b-0941f1899ad8-kube-api-access-476zd\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.822517 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80c56614-94b5-4a4b-843b-0941f1899ad8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.822543 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80c56614-94b5-4a4b-843b-0941f1899ad8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.822570 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80c56614-94b5-4a4b-843b-0941f1899ad8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 
12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.822589 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80c56614-94b5-4a4b-843b-0941f1899ad8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.822619 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80c56614-94b5-4a4b-843b-0941f1899ad8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.822640 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80c56614-94b5-4a4b-843b-0941f1899ad8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.822666 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.822689 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80c56614-94b5-4a4b-843b-0941f1899ad8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.822735 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80c56614-94b5-4a4b-843b-0941f1899ad8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.823263 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.823964 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80c56614-94b5-4a4b-843b-0941f1899ad8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.824144 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80c56614-94b5-4a4b-843b-0941f1899ad8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.824389 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80c56614-94b5-4a4b-843b-0941f1899ad8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.824905 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80c56614-94b5-4a4b-843b-0941f1899ad8-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.825264 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80c56614-94b5-4a4b-843b-0941f1899ad8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.828088 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80c56614-94b5-4a4b-843b-0941f1899ad8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.829104 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80c56614-94b5-4a4b-843b-0941f1899ad8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.831998 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80c56614-94b5-4a4b-843b-0941f1899ad8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.839227 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80c56614-94b5-4a4b-843b-0941f1899ad8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 
12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.842225 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-476zd\" (UniqueName: \"kubernetes.io/projected/80c56614-94b5-4a4b-843b-0941f1899ad8-kube-api-access-476zd\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.919378 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c56614-94b5-4a4b-843b-0941f1899ad8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:55 crc kubenswrapper[4756]: I1124 12:48:55.961846 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:48:56 crc kubenswrapper[4756]: I1124 12:48:56.276793 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df531574-9350-4c19-bc09-b95744b731d0","Type":"ContainerStarted","Data":"c16a86e7a8bab450ddd84d5aa51478809ce68f30206b10045adbbceaef1196ae"} Nov 24 12:48:56 crc kubenswrapper[4756]: I1124 12:48:56.298607 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-mmnqs"] Nov 24 12:48:56 crc kubenswrapper[4756]: I1124 12:48:56.308674 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 12:48:56 crc kubenswrapper[4756]: I1124 12:48:56.487834 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12075358-f893-49bc-9ace-dda0ce2865ec" path="/var/lib/kubelet/pods/12075358-f893-49bc-9ace-dda0ce2865ec/volumes" Nov 24 12:48:56 crc kubenswrapper[4756]: I1124 12:48:56.489624 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda12351-eabf-4909-a8fe-4cc2c3dabdb9" 
path="/var/lib/kubelet/pods/eda12351-eabf-4909-a8fe-4cc2c3dabdb9/volumes" Nov 24 12:48:57 crc kubenswrapper[4756]: I1124 12:48:57.287550 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"80c56614-94b5-4a4b-843b-0941f1899ad8","Type":"ContainerStarted","Data":"b636e21dae8347436ded7eed654b98b017ea94013d51d00cff3d6fcbb4e3706c"} Nov 24 12:48:57 crc kubenswrapper[4756]: I1124 12:48:57.290917 4756 generic.go:334] "Generic (PLEG): container finished" podID="284cab86-c5c6-42e4-a83e-7c898ec04933" containerID="823071e30616316288315d17794c2a16e98f287eda92cb6563bb1c77e8c837c2" exitCode=0 Nov 24 12:48:57 crc kubenswrapper[4756]: I1124 12:48:57.290953 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-mmnqs" event={"ID":"284cab86-c5c6-42e4-a83e-7c898ec04933","Type":"ContainerDied","Data":"823071e30616316288315d17794c2a16e98f287eda92cb6563bb1c77e8c837c2"} Nov 24 12:48:57 crc kubenswrapper[4756]: I1124 12:48:57.290977 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-mmnqs" event={"ID":"284cab86-c5c6-42e4-a83e-7c898ec04933","Type":"ContainerStarted","Data":"1d291cd7b55b575e9f8b08257a89aaa183a42f09309556d6a4b5bc07f1daba74"} Nov 24 12:48:58 crc kubenswrapper[4756]: I1124 12:48:58.302608 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"80c56614-94b5-4a4b-843b-0941f1899ad8","Type":"ContainerStarted","Data":"ca98e1e3bbf1d85e8ec21ccb2c0a791220ddf31fb5c7fc4796ce57300b3bbb95"} Nov 24 12:48:58 crc kubenswrapper[4756]: I1124 12:48:58.305688 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-mmnqs" event={"ID":"284cab86-c5c6-42e4-a83e-7c898ec04933","Type":"ContainerStarted","Data":"f3a1f9c46384f246320d892d425d14487d93d30b0a053d3b9c4232aa91bbb4b6"} Nov 24 12:48:58 crc kubenswrapper[4756]: I1124 12:48:58.305844 4756 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:48:58 crc kubenswrapper[4756]: I1124 12:48:58.307830 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df531574-9350-4c19-bc09-b95744b731d0","Type":"ContainerStarted","Data":"4476ea27e107d10d1e3100e112dc849cd3e286dbfc91f2f8ffa3ecc031c4a9a2"} Nov 24 12:48:58 crc kubenswrapper[4756]: I1124 12:48:58.374667 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-mmnqs" podStartSLOduration=3.374645724 podStartE2EDuration="3.374645724s" podCreationTimestamp="2025-11-24 12:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:48:58.366445447 +0000 UTC m=+1270.723959599" watchObservedRunningTime="2025-11-24 12:48:58.374645724 +0000 UTC m=+1270.732159866" Nov 24 12:49:05 crc kubenswrapper[4756]: I1124 12:49:05.785408 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:49:05 crc kubenswrapper[4756]: I1124 12:49:05.860304 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-6npz8"] Nov 24 12:49:05 crc kubenswrapper[4756]: I1124 12:49:05.860903 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" podUID="05c4f949-e288-4f9b-91d4-1468f79ad265" containerName="dnsmasq-dns" containerID="cri-o://00b7cd347a5bb3d5f47c515fe6e551b5232e897015fc1a0a039991d82c6cc01a" gracePeriod=10 Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.042375 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-dvbcg"] Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.044429 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.055417 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-dvbcg"] Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.174216 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.174306 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-config\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.174361 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.174431 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.174517 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.174557 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkjbp\" (UniqueName: \"kubernetes.io/projected/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-kube-api-access-jkjbp\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.174596 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.276904 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-config\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.277395 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.277438 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.277467 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.277494 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkjbp\" (UniqueName: \"kubernetes.io/projected/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-kube-api-access-jkjbp\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.277523 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.277643 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.278398 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-config\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.278620 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.278791 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.279079 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.279304 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.279411 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.299794 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkjbp\" (UniqueName: \"kubernetes.io/projected/a4fc331b-d9d7-4748-b1ef-2fae03d9b525-kube-api-access-jkjbp\") pod \"dnsmasq-dns-6b6dc74c5-dvbcg\" (UID: \"a4fc331b-d9d7-4748-b1ef-2fae03d9b525\") " pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.365194 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.419617 4756 generic.go:334] "Generic (PLEG): container finished" podID="05c4f949-e288-4f9b-91d4-1468f79ad265" containerID="00b7cd347a5bb3d5f47c515fe6e551b5232e897015fc1a0a039991d82c6cc01a" exitCode=0 Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.419696 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" event={"ID":"05c4f949-e288-4f9b-91d4-1468f79ad265","Type":"ContainerDied","Data":"00b7cd347a5bb3d5f47c515fe6e551b5232e897015fc1a0a039991d82c6cc01a"} Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.419729 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" event={"ID":"05c4f949-e288-4f9b-91d4-1468f79ad265","Type":"ContainerDied","Data":"b4955c28c6fc30dc01d776c82f26aac766a412fed1679ffb8f4b38026114f509"} Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.419772 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4955c28c6fc30dc01d776c82f26aac766a412fed1679ffb8f4b38026114f509" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.462104 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.582785 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-config\") pod \"05c4f949-e288-4f9b-91d4-1468f79ad265\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.583209 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-dns-svc\") pod \"05c4f949-e288-4f9b-91d4-1468f79ad265\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.583306 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-dns-swift-storage-0\") pod \"05c4f949-e288-4f9b-91d4-1468f79ad265\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.583353 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-ovsdbserver-nb\") pod \"05c4f949-e288-4f9b-91d4-1468f79ad265\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.583473 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27g2x\" (UniqueName: \"kubernetes.io/projected/05c4f949-e288-4f9b-91d4-1468f79ad265-kube-api-access-27g2x\") pod \"05c4f949-e288-4f9b-91d4-1468f79ad265\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.583575 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-ovsdbserver-sb\") pod \"05c4f949-e288-4f9b-91d4-1468f79ad265\" (UID: \"05c4f949-e288-4f9b-91d4-1468f79ad265\") " Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.588580 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c4f949-e288-4f9b-91d4-1468f79ad265-kube-api-access-27g2x" (OuterVolumeSpecName: "kube-api-access-27g2x") pod "05c4f949-e288-4f9b-91d4-1468f79ad265" (UID: "05c4f949-e288-4f9b-91d4-1468f79ad265"). InnerVolumeSpecName "kube-api-access-27g2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.635654 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "05c4f949-e288-4f9b-91d4-1468f79ad265" (UID: "05c4f949-e288-4f9b-91d4-1468f79ad265"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.645950 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "05c4f949-e288-4f9b-91d4-1468f79ad265" (UID: "05c4f949-e288-4f9b-91d4-1468f79ad265"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.650993 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05c4f949-e288-4f9b-91d4-1468f79ad265" (UID: "05c4f949-e288-4f9b-91d4-1468f79ad265"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.651041 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "05c4f949-e288-4f9b-91d4-1468f79ad265" (UID: "05c4f949-e288-4f9b-91d4-1468f79ad265"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.662527 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-config" (OuterVolumeSpecName: "config") pod "05c4f949-e288-4f9b-91d4-1468f79ad265" (UID: "05c4f949-e288-4f9b-91d4-1468f79ad265"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.686558 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.686605 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.686622 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.686637 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:06 crc 
kubenswrapper[4756]: I1124 12:49:06.686649 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27g2x\" (UniqueName: \"kubernetes.io/projected/05c4f949-e288-4f9b-91d4-1468f79ad265-kube-api-access-27g2x\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.686660 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05c4f949-e288-4f9b-91d4-1468f79ad265-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:06 crc kubenswrapper[4756]: I1124 12:49:06.896901 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-dvbcg"] Nov 24 12:49:07 crc kubenswrapper[4756]: I1124 12:49:07.430876 4756 generic.go:334] "Generic (PLEG): container finished" podID="a4fc331b-d9d7-4748-b1ef-2fae03d9b525" containerID="7f601e00e32fe08748f6dddd7af29a77cdc385ee2d8a57a3f39c3a6dfb26571d" exitCode=0 Nov 24 12:49:07 crc kubenswrapper[4756]: I1124 12:49:07.431316 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-6npz8" Nov 24 12:49:07 crc kubenswrapper[4756]: I1124 12:49:07.432041 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" event={"ID":"a4fc331b-d9d7-4748-b1ef-2fae03d9b525","Type":"ContainerDied","Data":"7f601e00e32fe08748f6dddd7af29a77cdc385ee2d8a57a3f39c3a6dfb26571d"} Nov 24 12:49:07 crc kubenswrapper[4756]: I1124 12:49:07.432102 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" event={"ID":"a4fc331b-d9d7-4748-b1ef-2fae03d9b525","Type":"ContainerStarted","Data":"e89f4500cf3b42c49500dbc98f2b132ebe836d3544e67b39ef24461c5be7e8c5"} Nov 24 12:49:07 crc kubenswrapper[4756]: I1124 12:49:07.661527 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-6npz8"] Nov 24 12:49:07 crc kubenswrapper[4756]: I1124 12:49:07.671785 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-6npz8"] Nov 24 12:49:08 crc kubenswrapper[4756]: I1124 12:49:08.444442 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" event={"ID":"a4fc331b-d9d7-4748-b1ef-2fae03d9b525","Type":"ContainerStarted","Data":"fbf86039d62f59a6c9ad1572089a6e5caa8ef84c9b60088d311b0798c2c5934b"} Nov 24 12:49:08 crc kubenswrapper[4756]: I1124 12:49:08.445668 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:08 crc kubenswrapper[4756]: I1124 12:49:08.477538 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" podStartSLOduration=2.477512071 podStartE2EDuration="2.477512071s" podCreationTimestamp="2025-11-24 12:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:49:08.464735903 +0000 UTC m=+1280.822250045" 
watchObservedRunningTime="2025-11-24 12:49:08.477512071 +0000 UTC m=+1280.835026213" Nov 24 12:49:08 crc kubenswrapper[4756]: I1124 12:49:08.486205 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c4f949-e288-4f9b-91d4-1468f79ad265" path="/var/lib/kubelet/pods/05c4f949-e288-4f9b-91d4-1468f79ad265/volumes" Nov 24 12:49:16 crc kubenswrapper[4756]: I1124 12:49:16.366324 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b6dc74c5-dvbcg" Nov 24 12:49:16 crc kubenswrapper[4756]: I1124 12:49:16.446441 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-mmnqs"] Nov 24 12:49:16 crc kubenswrapper[4756]: I1124 12:49:16.446937 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-mmnqs" podUID="284cab86-c5c6-42e4-a83e-7c898ec04933" containerName="dnsmasq-dns" containerID="cri-o://f3a1f9c46384f246320d892d425d14487d93d30b0a053d3b9c4232aa91bbb4b6" gracePeriod=10 Nov 24 12:49:16 crc kubenswrapper[4756]: I1124 12:49:16.981731 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.023602 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-dns-svc\") pod \"284cab86-c5c6-42e4-a83e-7c898ec04933\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.023722 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp7wm\" (UniqueName: \"kubernetes.io/projected/284cab86-c5c6-42e4-a83e-7c898ec04933-kube-api-access-xp7wm\") pod \"284cab86-c5c6-42e4-a83e-7c898ec04933\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.023761 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-ovsdbserver-nb\") pod \"284cab86-c5c6-42e4-a83e-7c898ec04933\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.023809 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-dns-swift-storage-0\") pod \"284cab86-c5c6-42e4-a83e-7c898ec04933\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.023956 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-ovsdbserver-sb\") pod \"284cab86-c5c6-42e4-a83e-7c898ec04933\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.024752 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-config\") pod \"284cab86-c5c6-42e4-a83e-7c898ec04933\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.024841 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-openstack-edpm-ipam\") pod \"284cab86-c5c6-42e4-a83e-7c898ec04933\" (UID: \"284cab86-c5c6-42e4-a83e-7c898ec04933\") " Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.036128 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284cab86-c5c6-42e4-a83e-7c898ec04933-kube-api-access-xp7wm" (OuterVolumeSpecName: "kube-api-access-xp7wm") pod "284cab86-c5c6-42e4-a83e-7c898ec04933" (UID: "284cab86-c5c6-42e4-a83e-7c898ec04933"). InnerVolumeSpecName "kube-api-access-xp7wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.104691 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-config" (OuterVolumeSpecName: "config") pod "284cab86-c5c6-42e4-a83e-7c898ec04933" (UID: "284cab86-c5c6-42e4-a83e-7c898ec04933"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.110794 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "284cab86-c5c6-42e4-a83e-7c898ec04933" (UID: "284cab86-c5c6-42e4-a83e-7c898ec04933"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.111513 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "284cab86-c5c6-42e4-a83e-7c898ec04933" (UID: "284cab86-c5c6-42e4-a83e-7c898ec04933"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.112741 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "284cab86-c5c6-42e4-a83e-7c898ec04933" (UID: "284cab86-c5c6-42e4-a83e-7c898ec04933"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.115667 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "284cab86-c5c6-42e4-a83e-7c898ec04933" (UID: "284cab86-c5c6-42e4-a83e-7c898ec04933"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.127668 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.127874 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-config\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.128006 4756 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.128104 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.128172 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp7wm\" (UniqueName: \"kubernetes.io/projected/284cab86-c5c6-42e4-a83e-7c898ec04933-kube-api-access-xp7wm\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.128235 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.132239 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "284cab86-c5c6-42e4-a83e-7c898ec04933" (UID: 
"284cab86-c5c6-42e4-a83e-7c898ec04933"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.229860 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/284cab86-c5c6-42e4-a83e-7c898ec04933-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.536092 4756 generic.go:334] "Generic (PLEG): container finished" podID="284cab86-c5c6-42e4-a83e-7c898ec04933" containerID="f3a1f9c46384f246320d892d425d14487d93d30b0a053d3b9c4232aa91bbb4b6" exitCode=0 Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.536134 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-mmnqs" event={"ID":"284cab86-c5c6-42e4-a83e-7c898ec04933","Type":"ContainerDied","Data":"f3a1f9c46384f246320d892d425d14487d93d30b0a053d3b9c4232aa91bbb4b6"} Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.536185 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-mmnqs" event={"ID":"284cab86-c5c6-42e4-a83e-7c898ec04933","Type":"ContainerDied","Data":"1d291cd7b55b575e9f8b08257a89aaa183a42f09309556d6a4b5bc07f1daba74"} Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.536208 4756 scope.go:117] "RemoveContainer" containerID="f3a1f9c46384f246320d892d425d14487d93d30b0a053d3b9c4232aa91bbb4b6" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.536233 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-mmnqs" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.564190 4756 scope.go:117] "RemoveContainer" containerID="823071e30616316288315d17794c2a16e98f287eda92cb6563bb1c77e8c837c2" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.574880 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-mmnqs"] Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.583994 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-mmnqs"] Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.636186 4756 scope.go:117] "RemoveContainer" containerID="f3a1f9c46384f246320d892d425d14487d93d30b0a053d3b9c4232aa91bbb4b6" Nov 24 12:49:17 crc kubenswrapper[4756]: E1124 12:49:17.636731 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3a1f9c46384f246320d892d425d14487d93d30b0a053d3b9c4232aa91bbb4b6\": container with ID starting with f3a1f9c46384f246320d892d425d14487d93d30b0a053d3b9c4232aa91bbb4b6 not found: ID does not exist" containerID="f3a1f9c46384f246320d892d425d14487d93d30b0a053d3b9c4232aa91bbb4b6" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.636775 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a1f9c46384f246320d892d425d14487d93d30b0a053d3b9c4232aa91bbb4b6"} err="failed to get container status \"f3a1f9c46384f246320d892d425d14487d93d30b0a053d3b9c4232aa91bbb4b6\": rpc error: code = NotFound desc = could not find container \"f3a1f9c46384f246320d892d425d14487d93d30b0a053d3b9c4232aa91bbb4b6\": container with ID starting with f3a1f9c46384f246320d892d425d14487d93d30b0a053d3b9c4232aa91bbb4b6 not found: ID does not exist" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.636802 4756 scope.go:117] "RemoveContainer" containerID="823071e30616316288315d17794c2a16e98f287eda92cb6563bb1c77e8c837c2" Nov 24 
12:49:17 crc kubenswrapper[4756]: E1124 12:49:17.637109 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823071e30616316288315d17794c2a16e98f287eda92cb6563bb1c77e8c837c2\": container with ID starting with 823071e30616316288315d17794c2a16e98f287eda92cb6563bb1c77e8c837c2 not found: ID does not exist" containerID="823071e30616316288315d17794c2a16e98f287eda92cb6563bb1c77e8c837c2" Nov 24 12:49:17 crc kubenswrapper[4756]: I1124 12:49:17.637145 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823071e30616316288315d17794c2a16e98f287eda92cb6563bb1c77e8c837c2"} err="failed to get container status \"823071e30616316288315d17794c2a16e98f287eda92cb6563bb1c77e8c837c2\": rpc error: code = NotFound desc = could not find container \"823071e30616316288315d17794c2a16e98f287eda92cb6563bb1c77e8c837c2\": container with ID starting with 823071e30616316288315d17794c2a16e98f287eda92cb6563bb1c77e8c837c2 not found: ID does not exist" Nov 24 12:49:18 crc kubenswrapper[4756]: I1124 12:49:18.490544 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="284cab86-c5c6-42e4-a83e-7c898ec04933" path="/var/lib/kubelet/pods/284cab86-c5c6-42e4-a83e-7c898ec04933/volumes" Nov 24 12:49:29 crc kubenswrapper[4756]: I1124 12:49:29.652681 4756 generic.go:334] "Generic (PLEG): container finished" podID="df531574-9350-4c19-bc09-b95744b731d0" containerID="4476ea27e107d10d1e3100e112dc849cd3e286dbfc91f2f8ffa3ecc031c4a9a2" exitCode=0 Nov 24 12:49:29 crc kubenswrapper[4756]: I1124 12:49:29.652763 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df531574-9350-4c19-bc09-b95744b731d0","Type":"ContainerDied","Data":"4476ea27e107d10d1e3100e112dc849cd3e286dbfc91f2f8ffa3ecc031c4a9a2"} Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.278059 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm"] Nov 24 12:49:30 crc kubenswrapper[4756]: E1124 12:49:30.278975 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284cab86-c5c6-42e4-a83e-7c898ec04933" containerName="dnsmasq-dns" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.278993 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="284cab86-c5c6-42e4-a83e-7c898ec04933" containerName="dnsmasq-dns" Nov 24 12:49:30 crc kubenswrapper[4756]: E1124 12:49:30.279011 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c4f949-e288-4f9b-91d4-1468f79ad265" containerName="init" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.279018 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c4f949-e288-4f9b-91d4-1468f79ad265" containerName="init" Nov 24 12:49:30 crc kubenswrapper[4756]: E1124 12:49:30.279042 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c4f949-e288-4f9b-91d4-1468f79ad265" containerName="dnsmasq-dns" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.279048 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c4f949-e288-4f9b-91d4-1468f79ad265" containerName="dnsmasq-dns" Nov 24 12:49:30 crc kubenswrapper[4756]: E1124 12:49:30.279057 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284cab86-c5c6-42e4-a83e-7c898ec04933" containerName="init" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.279063 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="284cab86-c5c6-42e4-a83e-7c898ec04933" containerName="init" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.279267 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="284cab86-c5c6-42e4-a83e-7c898ec04933" containerName="dnsmasq-dns" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.279295 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c4f949-e288-4f9b-91d4-1468f79ad265" containerName="dnsmasq-dns" 
Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.280019 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.281751 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.281914 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.281964 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.282074 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.295972 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm"] Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.405108 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.405243 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hrvf\" (UniqueName: \"kubernetes.io/projected/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-kube-api-access-9hrvf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.405343 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.405408 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.506819 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.506881 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hrvf\" (UniqueName: \"kubernetes.io/projected/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-kube-api-access-9hrvf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.507065 4756 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.507273 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.513551 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.514871 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.516382 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.551588 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hrvf\" (UniqueName: \"kubernetes.io/projected/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-kube-api-access-9hrvf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.605418 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.671379 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df531574-9350-4c19-bc09-b95744b731d0","Type":"ContainerStarted","Data":"2740ab2639034095e56c3cc67b2ff1ead955724ae3679e05557a20700a6ddb58"} Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.671605 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.674276 4756 generic.go:334] "Generic (PLEG): container finished" podID="80c56614-94b5-4a4b-843b-0941f1899ad8" containerID="ca98e1e3bbf1d85e8ec21ccb2c0a791220ddf31fb5c7fc4796ce57300b3bbb95" exitCode=0 Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.674331 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"80c56614-94b5-4a4b-843b-0941f1899ad8","Type":"ContainerDied","Data":"ca98e1e3bbf1d85e8ec21ccb2c0a791220ddf31fb5c7fc4796ce57300b3bbb95"} Nov 24 12:49:30 crc kubenswrapper[4756]: I1124 12:49:30.699967 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.699936269 podStartE2EDuration="36.699936269s" 
podCreationTimestamp="2025-11-24 12:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:49:30.695088131 +0000 UTC m=+1303.052602273" watchObservedRunningTime="2025-11-24 12:49:30.699936269 +0000 UTC m=+1303.057450411" Nov 24 12:49:31 crc kubenswrapper[4756]: I1124 12:49:31.213992 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm"] Nov 24 12:49:31 crc kubenswrapper[4756]: W1124 12:49:31.214874 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f6dbb8f_7ae0_4132_ac08_12d04b55bb90.slice/crio-2decb9b6e86aaa7f810871f618f69822ac707525dbfd3f0aaf92310ef4f1926d WatchSource:0}: Error finding container 2decb9b6e86aaa7f810871f618f69822ac707525dbfd3f0aaf92310ef4f1926d: Status 404 returned error can't find the container with id 2decb9b6e86aaa7f810871f618f69822ac707525dbfd3f0aaf92310ef4f1926d Nov 24 12:49:31 crc kubenswrapper[4756]: I1124 12:49:31.687134 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"80c56614-94b5-4a4b-843b-0941f1899ad8","Type":"ContainerStarted","Data":"dc4349deb7b0c0d3468c7e71174bf82652509242de18b0771e6ee7b563580284"} Nov 24 12:49:31 crc kubenswrapper[4756]: I1124 12:49:31.687634 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:49:31 crc kubenswrapper[4756]: I1124 12:49:31.689227 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" event={"ID":"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90","Type":"ContainerStarted","Data":"2decb9b6e86aaa7f810871f618f69822ac707525dbfd3f0aaf92310ef4f1926d"} Nov 24 12:49:31 crc kubenswrapper[4756]: I1124 12:49:31.710648 4756 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.710628823 podStartE2EDuration="36.710628823s" podCreationTimestamp="2025-11-24 12:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 12:49:31.706886094 +0000 UTC m=+1304.064400256" watchObservedRunningTime="2025-11-24 12:49:31.710628823 +0000 UTC m=+1304.068142965" Nov 24 12:49:42 crc kubenswrapper[4756]: I1124 12:49:42.802188 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" event={"ID":"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90","Type":"ContainerStarted","Data":"1f65ab75aa5dc96b77c34a1385e74d624f8c3e8f3adc3ea64d375b210347e56d"} Nov 24 12:49:44 crc kubenswrapper[4756]: I1124 12:49:44.930456 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 12:49:44 crc kubenswrapper[4756]: I1124 12:49:44.957811 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" podStartSLOduration=4.143934643 podStartE2EDuration="14.9577871s" podCreationTimestamp="2025-11-24 12:49:30 +0000 UTC" firstStartedPulling="2025-11-24 12:49:31.217007576 +0000 UTC m=+1303.574521718" lastFinishedPulling="2025-11-24 12:49:42.030860033 +0000 UTC m=+1314.388374175" observedRunningTime="2025-11-24 12:49:42.83048906 +0000 UTC m=+1315.188003202" watchObservedRunningTime="2025-11-24 12:49:44.9577871 +0000 UTC m=+1317.315301242" Nov 24 12:49:45 crc kubenswrapper[4756]: I1124 12:49:45.967481 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 12:49:56 crc kubenswrapper[4756]: I1124 12:49:56.932900 4756 generic.go:334] "Generic (PLEG): container finished" podID="1f6dbb8f-7ae0-4132-ac08-12d04b55bb90" 
containerID="1f65ab75aa5dc96b77c34a1385e74d624f8c3e8f3adc3ea64d375b210347e56d" exitCode=0 Nov 24 12:49:56 crc kubenswrapper[4756]: I1124 12:49:56.933485 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" event={"ID":"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90","Type":"ContainerDied","Data":"1f65ab75aa5dc96b77c34a1385e74d624f8c3e8f3adc3ea64d375b210347e56d"} Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.430209 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.562556 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hrvf\" (UniqueName: \"kubernetes.io/projected/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-kube-api-access-9hrvf\") pod \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.562721 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-repo-setup-combined-ca-bundle\") pod \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.562759 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-ssh-key\") pod \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.562843 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-inventory\") pod 
\"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\" (UID: \"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90\") " Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.575718 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-kube-api-access-9hrvf" (OuterVolumeSpecName: "kube-api-access-9hrvf") pod "1f6dbb8f-7ae0-4132-ac08-12d04b55bb90" (UID: "1f6dbb8f-7ae0-4132-ac08-12d04b55bb90"). InnerVolumeSpecName "kube-api-access-9hrvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.585419 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1f6dbb8f-7ae0-4132-ac08-12d04b55bb90" (UID: "1f6dbb8f-7ae0-4132-ac08-12d04b55bb90"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.597685 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-inventory" (OuterVolumeSpecName: "inventory") pod "1f6dbb8f-7ae0-4132-ac08-12d04b55bb90" (UID: "1f6dbb8f-7ae0-4132-ac08-12d04b55bb90"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.603951 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1f6dbb8f-7ae0-4132-ac08-12d04b55bb90" (UID: "1f6dbb8f-7ae0-4132-ac08-12d04b55bb90"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.665700 4756 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.665736 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.665748 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.665757 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hrvf\" (UniqueName: \"kubernetes.io/projected/1f6dbb8f-7ae0-4132-ac08-12d04b55bb90-kube-api-access-9hrvf\") on node \"crc\" DevicePath \"\"" Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.956547 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" event={"ID":"1f6dbb8f-7ae0-4132-ac08-12d04b55bb90","Type":"ContainerDied","Data":"2decb9b6e86aaa7f810871f618f69822ac707525dbfd3f0aaf92310ef4f1926d"} Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.956606 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2decb9b6e86aaa7f810871f618f69822ac707525dbfd3f0aaf92310ef4f1926d" Nov 24 12:49:58 crc kubenswrapper[4756]: I1124 12:49:58.956663 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.094801 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v"] Nov 24 12:49:59 crc kubenswrapper[4756]: E1124 12:49:59.095347 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6dbb8f-7ae0-4132-ac08-12d04b55bb90" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.095367 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6dbb8f-7ae0-4132-ac08-12d04b55bb90" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.095628 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f6dbb8f-7ae0-4132-ac08-12d04b55bb90" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.096446 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.102554 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.102857 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.103030 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.112862 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.129000 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v"] Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.177443 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b3ef56e-99e5-44c6-8a14-b49385bf3144-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-42v8v\" (UID: \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.177498 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwpxg\" (UniqueName: \"kubernetes.io/projected/2b3ef56e-99e5-44c6-8a14-b49385bf3144-kube-api-access-wwpxg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-42v8v\" (UID: \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.177533 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b3ef56e-99e5-44c6-8a14-b49385bf3144-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-42v8v\" (UID: \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.281357 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b3ef56e-99e5-44c6-8a14-b49385bf3144-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-42v8v\" (UID: \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.281413 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwpxg\" (UniqueName: \"kubernetes.io/projected/2b3ef56e-99e5-44c6-8a14-b49385bf3144-kube-api-access-wwpxg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-42v8v\" (UID: \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.281449 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b3ef56e-99e5-44c6-8a14-b49385bf3144-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-42v8v\" (UID: \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.291061 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b3ef56e-99e5-44c6-8a14-b49385bf3144-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-42v8v\" (UID: \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.308026 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b3ef56e-99e5-44c6-8a14-b49385bf3144-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-42v8v\" (UID: \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.327667 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwpxg\" (UniqueName: \"kubernetes.io/projected/2b3ef56e-99e5-44c6-8a14-b49385bf3144-kube-api-access-wwpxg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-42v8v\" (UID: \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" Nov 24 12:49:59 crc kubenswrapper[4756]: I1124 12:49:59.467724 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" Nov 24 12:50:00 crc kubenswrapper[4756]: W1124 12:50:00.025122 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b3ef56e_99e5_44c6_8a14_b49385bf3144.slice/crio-28d5831346fafe0663115594ae0906bf0e603b40ece400cf5565e9966562ff07 WatchSource:0}: Error finding container 28d5831346fafe0663115594ae0906bf0e603b40ece400cf5565e9966562ff07: Status 404 returned error can't find the container with id 28d5831346fafe0663115594ae0906bf0e603b40ece400cf5565e9966562ff07 Nov 24 12:50:00 crc kubenswrapper[4756]: I1124 12:50:00.027652 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v"] Nov 24 12:50:00 crc kubenswrapper[4756]: I1124 12:50:00.987495 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" event={"ID":"2b3ef56e-99e5-44c6-8a14-b49385bf3144","Type":"ContainerStarted","Data":"b142a7f8ba4182a45851caaa3116f79a4f5275221578e238ea5ca9ae18352fbb"} Nov 24 12:50:00 crc kubenswrapper[4756]: I1124 12:50:00.987842 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" event={"ID":"2b3ef56e-99e5-44c6-8a14-b49385bf3144","Type":"ContainerStarted","Data":"28d5831346fafe0663115594ae0906bf0e603b40ece400cf5565e9966562ff07"} Nov 24 12:50:01 crc kubenswrapper[4756]: I1124 12:50:01.013292 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" podStartSLOduration=1.470087787 podStartE2EDuration="2.013269305s" podCreationTimestamp="2025-11-24 12:49:59 +0000 UTC" firstStartedPulling="2025-11-24 12:50:00.027581974 +0000 UTC m=+1332.385096106" lastFinishedPulling="2025-11-24 12:50:00.570763482 +0000 UTC m=+1332.928277624" observedRunningTime="2025-11-24 
12:50:01.010693877 +0000 UTC m=+1333.368208019" watchObservedRunningTime="2025-11-24 12:50:01.013269305 +0000 UTC m=+1333.370783457" Nov 24 12:50:03 crc kubenswrapper[4756]: I1124 12:50:03.479455 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:50:03 crc kubenswrapper[4756]: I1124 12:50:03.479797 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:50:04 crc kubenswrapper[4756]: I1124 12:50:04.017098 4756 generic.go:334] "Generic (PLEG): container finished" podID="2b3ef56e-99e5-44c6-8a14-b49385bf3144" containerID="b142a7f8ba4182a45851caaa3116f79a4f5275221578e238ea5ca9ae18352fbb" exitCode=0 Nov 24 12:50:04 crc kubenswrapper[4756]: I1124 12:50:04.017238 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" event={"ID":"2b3ef56e-99e5-44c6-8a14-b49385bf3144","Type":"ContainerDied","Data":"b142a7f8ba4182a45851caaa3116f79a4f5275221578e238ea5ca9ae18352fbb"} Nov 24 12:50:05 crc kubenswrapper[4756]: I1124 12:50:05.447262 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" Nov 24 12:50:05 crc kubenswrapper[4756]: I1124 12:50:05.504694 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b3ef56e-99e5-44c6-8a14-b49385bf3144-inventory\") pod \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\" (UID: \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\") " Nov 24 12:50:05 crc kubenswrapper[4756]: I1124 12:50:05.504827 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwpxg\" (UniqueName: \"kubernetes.io/projected/2b3ef56e-99e5-44c6-8a14-b49385bf3144-kube-api-access-wwpxg\") pod \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\" (UID: \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\") " Nov 24 12:50:05 crc kubenswrapper[4756]: I1124 12:50:05.504857 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b3ef56e-99e5-44c6-8a14-b49385bf3144-ssh-key\") pod \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\" (UID: \"2b3ef56e-99e5-44c6-8a14-b49385bf3144\") " Nov 24 12:50:05 crc kubenswrapper[4756]: I1124 12:50:05.515145 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3ef56e-99e5-44c6-8a14-b49385bf3144-kube-api-access-wwpxg" (OuterVolumeSpecName: "kube-api-access-wwpxg") pod "2b3ef56e-99e5-44c6-8a14-b49385bf3144" (UID: "2b3ef56e-99e5-44c6-8a14-b49385bf3144"). InnerVolumeSpecName "kube-api-access-wwpxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:50:05 crc kubenswrapper[4756]: I1124 12:50:05.539563 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3ef56e-99e5-44c6-8a14-b49385bf3144-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2b3ef56e-99e5-44c6-8a14-b49385bf3144" (UID: "2b3ef56e-99e5-44c6-8a14-b49385bf3144"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:50:05 crc kubenswrapper[4756]: I1124 12:50:05.540885 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3ef56e-99e5-44c6-8a14-b49385bf3144-inventory" (OuterVolumeSpecName: "inventory") pod "2b3ef56e-99e5-44c6-8a14-b49385bf3144" (UID: "2b3ef56e-99e5-44c6-8a14-b49385bf3144"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:50:05 crc kubenswrapper[4756]: I1124 12:50:05.609026 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b3ef56e-99e5-44c6-8a14-b49385bf3144-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:50:05 crc kubenswrapper[4756]: I1124 12:50:05.609066 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwpxg\" (UniqueName: \"kubernetes.io/projected/2b3ef56e-99e5-44c6-8a14-b49385bf3144-kube-api-access-wwpxg\") on node \"crc\" DevicePath \"\"" Nov 24 12:50:05 crc kubenswrapper[4756]: I1124 12:50:05.609087 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b3ef56e-99e5-44c6-8a14-b49385bf3144-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.038996 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" event={"ID":"2b3ef56e-99e5-44c6-8a14-b49385bf3144","Type":"ContainerDied","Data":"28d5831346fafe0663115594ae0906bf0e603b40ece400cf5565e9966562ff07"} Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.039350 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28d5831346fafe0663115594ae0906bf0e603b40ece400cf5565e9966562ff07" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.039091 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-42v8v" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.124915 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc"] Nov 24 12:50:06 crc kubenswrapper[4756]: E1124 12:50:06.125366 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3ef56e-99e5-44c6-8a14-b49385bf3144" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.125384 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3ef56e-99e5-44c6-8a14-b49385bf3144" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.125600 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3ef56e-99e5-44c6-8a14-b49385bf3144" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.126297 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.133705 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc"] Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.137189 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.137609 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.137800 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.141820 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.221100 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxm8r\" (UniqueName: \"kubernetes.io/projected/4e481796-37f1-413f-8274-2d32d2f3ef5c-kube-api-access-wxm8r\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.221404 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.221576 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.221744 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.324354 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.324546 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.325014 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxm8r\" (UniqueName: \"kubernetes.io/projected/4e481796-37f1-413f-8274-2d32d2f3ef5c-kube-api-access-wxm8r\") 
pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.325725 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.329943 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.331010 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.331109 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.342634 4756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wxm8r\" (UniqueName: \"kubernetes.io/projected/4e481796-37f1-413f-8274-2d32d2f3ef5c-kube-api-access-wxm8r\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.442521 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:50:06 crc kubenswrapper[4756]: I1124 12:50:06.990892 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc"] Nov 24 12:50:06 crc kubenswrapper[4756]: W1124 12:50:06.996131 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e481796_37f1_413f_8274_2d32d2f3ef5c.slice/crio-b86d579788c2b67253fb42205e41f887188433f170e6876923993a849b97f7a0 WatchSource:0}: Error finding container b86d579788c2b67253fb42205e41f887188433f170e6876923993a849b97f7a0: Status 404 returned error can't find the container with id b86d579788c2b67253fb42205e41f887188433f170e6876923993a849b97f7a0 Nov 24 12:50:07 crc kubenswrapper[4756]: I1124 12:50:07.050834 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" event={"ID":"4e481796-37f1-413f-8274-2d32d2f3ef5c","Type":"ContainerStarted","Data":"b86d579788c2b67253fb42205e41f887188433f170e6876923993a849b97f7a0"} Nov 24 12:50:08 crc kubenswrapper[4756]: I1124 12:50:08.060894 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" event={"ID":"4e481796-37f1-413f-8274-2d32d2f3ef5c","Type":"ContainerStarted","Data":"d5c1075c3b3b0a9b26c7a7bf3025db5e616cbd12a144cd07e42c4404e903071e"} Nov 24 12:50:08 crc kubenswrapper[4756]: I1124 12:50:08.080135 
4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" podStartSLOduration=1.520633037 podStartE2EDuration="2.080113197s" podCreationTimestamp="2025-11-24 12:50:06 +0000 UTC" firstStartedPulling="2025-11-24 12:50:06.998453675 +0000 UTC m=+1339.355967817" lastFinishedPulling="2025-11-24 12:50:07.557933835 +0000 UTC m=+1339.915447977" observedRunningTime="2025-11-24 12:50:08.076942723 +0000 UTC m=+1340.434456875" watchObservedRunningTime="2025-11-24 12:50:08.080113197 +0000 UTC m=+1340.437627339" Nov 24 12:50:16 crc kubenswrapper[4756]: I1124 12:50:16.583858 4756 scope.go:117] "RemoveContainer" containerID="0928bec15dec4905b6f7eb3b0e83190daa0a2d54a6079c1ad6e8d4bcc66c176a" Nov 24 12:50:16 crc kubenswrapper[4756]: I1124 12:50:16.616636 4756 scope.go:117] "RemoveContainer" containerID="56bed93aed9f1455e0b712cc97d3365469269eedfd0f4167064fd6731f02e557" Nov 24 12:50:33 crc kubenswrapper[4756]: I1124 12:50:33.478802 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:50:33 crc kubenswrapper[4756]: I1124 12:50:33.479456 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:51:03 crc kubenswrapper[4756]: I1124 12:51:03.479573 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:51:03 crc kubenswrapper[4756]: I1124 12:51:03.480189 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:51:03 crc kubenswrapper[4756]: I1124 12:51:03.480229 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:51:03 crc kubenswrapper[4756]: I1124 12:51:03.480634 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac456773527ad616724bb83ec4d86cebb123ce3812e319a053d93a0cd5386883"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:51:03 crc kubenswrapper[4756]: I1124 12:51:03.480688 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://ac456773527ad616724bb83ec4d86cebb123ce3812e319a053d93a0cd5386883" gracePeriod=600 Nov 24 12:51:03 crc kubenswrapper[4756]: I1124 12:51:03.679826 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="ac456773527ad616724bb83ec4d86cebb123ce3812e319a053d93a0cd5386883" exitCode=0 Nov 24 12:51:03 crc kubenswrapper[4756]: I1124 12:51:03.679900 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" 
event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"ac456773527ad616724bb83ec4d86cebb123ce3812e319a053d93a0cd5386883"} Nov 24 12:51:03 crc kubenswrapper[4756]: I1124 12:51:03.679982 4756 scope.go:117] "RemoveContainer" containerID="aab2c62b178595e23ab652b4142321a0148fc0017610e7cc4f9bf61e40ae4629" Nov 24 12:51:04 crc kubenswrapper[4756]: I1124 12:51:04.697097 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb"} Nov 24 12:51:10 crc kubenswrapper[4756]: I1124 12:51:10.680902 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-85kfw"] Nov 24 12:51:10 crc kubenswrapper[4756]: I1124 12:51:10.683444 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:10 crc kubenswrapper[4756]: I1124 12:51:10.692514 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-85kfw"] Nov 24 12:51:10 crc kubenswrapper[4756]: I1124 12:51:10.829929 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwsfm\" (UniqueName: \"kubernetes.io/projected/3d691358-63dd-4f9a-800c-cfe94770690e-kube-api-access-lwsfm\") pod \"certified-operators-85kfw\" (UID: \"3d691358-63dd-4f9a-800c-cfe94770690e\") " pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:10 crc kubenswrapper[4756]: I1124 12:51:10.830248 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d691358-63dd-4f9a-800c-cfe94770690e-catalog-content\") pod \"certified-operators-85kfw\" (UID: \"3d691358-63dd-4f9a-800c-cfe94770690e\") " 
pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:10 crc kubenswrapper[4756]: I1124 12:51:10.830474 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d691358-63dd-4f9a-800c-cfe94770690e-utilities\") pod \"certified-operators-85kfw\" (UID: \"3d691358-63dd-4f9a-800c-cfe94770690e\") " pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:10 crc kubenswrapper[4756]: I1124 12:51:10.932737 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d691358-63dd-4f9a-800c-cfe94770690e-catalog-content\") pod \"certified-operators-85kfw\" (UID: \"3d691358-63dd-4f9a-800c-cfe94770690e\") " pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:10 crc kubenswrapper[4756]: I1124 12:51:10.932808 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d691358-63dd-4f9a-800c-cfe94770690e-utilities\") pod \"certified-operators-85kfw\" (UID: \"3d691358-63dd-4f9a-800c-cfe94770690e\") " pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:10 crc kubenswrapper[4756]: I1124 12:51:10.932903 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwsfm\" (UniqueName: \"kubernetes.io/projected/3d691358-63dd-4f9a-800c-cfe94770690e-kube-api-access-lwsfm\") pod \"certified-operators-85kfw\" (UID: \"3d691358-63dd-4f9a-800c-cfe94770690e\") " pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:10 crc kubenswrapper[4756]: I1124 12:51:10.933577 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d691358-63dd-4f9a-800c-cfe94770690e-utilities\") pod \"certified-operators-85kfw\" (UID: \"3d691358-63dd-4f9a-800c-cfe94770690e\") " 
pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:10 crc kubenswrapper[4756]: I1124 12:51:10.933725 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d691358-63dd-4f9a-800c-cfe94770690e-catalog-content\") pod \"certified-operators-85kfw\" (UID: \"3d691358-63dd-4f9a-800c-cfe94770690e\") " pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:10 crc kubenswrapper[4756]: I1124 12:51:10.953963 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwsfm\" (UniqueName: \"kubernetes.io/projected/3d691358-63dd-4f9a-800c-cfe94770690e-kube-api-access-lwsfm\") pod \"certified-operators-85kfw\" (UID: \"3d691358-63dd-4f9a-800c-cfe94770690e\") " pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:11 crc kubenswrapper[4756]: I1124 12:51:11.021669 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:11 crc kubenswrapper[4756]: I1124 12:51:11.502327 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-85kfw"] Nov 24 12:51:11 crc kubenswrapper[4756]: I1124 12:51:11.766704 4756 generic.go:334] "Generic (PLEG): container finished" podID="3d691358-63dd-4f9a-800c-cfe94770690e" containerID="556855d65a77d13ee9a2d92606eb326162ee97ada1cb3e4fb20199884fb1f0bf" exitCode=0 Nov 24 12:51:11 crc kubenswrapper[4756]: I1124 12:51:11.766780 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85kfw" event={"ID":"3d691358-63dd-4f9a-800c-cfe94770690e","Type":"ContainerDied","Data":"556855d65a77d13ee9a2d92606eb326162ee97ada1cb3e4fb20199884fb1f0bf"} Nov 24 12:51:11 crc kubenswrapper[4756]: I1124 12:51:11.767023 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85kfw" 
event={"ID":"3d691358-63dd-4f9a-800c-cfe94770690e","Type":"ContainerStarted","Data":"baa7b25dc55a28b3d946060e84e9a9229578b1566ce54a8aef6bd7160e86b83a"} Nov 24 12:51:12 crc kubenswrapper[4756]: I1124 12:51:12.779051 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85kfw" event={"ID":"3d691358-63dd-4f9a-800c-cfe94770690e","Type":"ContainerStarted","Data":"ffc92902e31513799c6870712f77709496d4874474c5dcb3ce6651c565d1bd75"} Nov 24 12:51:13 crc kubenswrapper[4756]: I1124 12:51:13.791484 4756 generic.go:334] "Generic (PLEG): container finished" podID="3d691358-63dd-4f9a-800c-cfe94770690e" containerID="ffc92902e31513799c6870712f77709496d4874474c5dcb3ce6651c565d1bd75" exitCode=0 Nov 24 12:51:13 crc kubenswrapper[4756]: I1124 12:51:13.791665 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85kfw" event={"ID":"3d691358-63dd-4f9a-800c-cfe94770690e","Type":"ContainerDied","Data":"ffc92902e31513799c6870712f77709496d4874474c5dcb3ce6651c565d1bd75"} Nov 24 12:51:14 crc kubenswrapper[4756]: I1124 12:51:14.805830 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85kfw" event={"ID":"3d691358-63dd-4f9a-800c-cfe94770690e","Type":"ContainerStarted","Data":"22d6722f8649632b0d3e3e8efe968485904daf728824383f957db8d5e19543c9"} Nov 24 12:51:14 crc kubenswrapper[4756]: I1124 12:51:14.836110 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-85kfw" podStartSLOduration=2.196398338 podStartE2EDuration="4.836083279s" podCreationTimestamp="2025-11-24 12:51:10 +0000 UTC" firstStartedPulling="2025-11-24 12:51:11.769604769 +0000 UTC m=+1404.127118911" lastFinishedPulling="2025-11-24 12:51:14.40928972 +0000 UTC m=+1406.766803852" observedRunningTime="2025-11-24 12:51:14.828208636 +0000 UTC m=+1407.185722798" watchObservedRunningTime="2025-11-24 12:51:14.836083279 +0000 UTC 
m=+1407.193597431" Nov 24 12:51:16 crc kubenswrapper[4756]: I1124 12:51:16.747651 4756 scope.go:117] "RemoveContainer" containerID="950682b01db04530c0ac8af34323afc47edae40fe59c7b36a210b702de6b4ccf" Nov 24 12:51:21 crc kubenswrapper[4756]: I1124 12:51:21.022693 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:21 crc kubenswrapper[4756]: I1124 12:51:21.023345 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:21 crc kubenswrapper[4756]: I1124 12:51:21.068271 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:21 crc kubenswrapper[4756]: I1124 12:51:21.945026 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:22 crc kubenswrapper[4756]: I1124 12:51:22.017896 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-85kfw"] Nov 24 12:51:23 crc kubenswrapper[4756]: I1124 12:51:23.917238 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-85kfw" podUID="3d691358-63dd-4f9a-800c-cfe94770690e" containerName="registry-server" containerID="cri-o://22d6722f8649632b0d3e3e8efe968485904daf728824383f957db8d5e19543c9" gracePeriod=2 Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.426461 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.522799 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d691358-63dd-4f9a-800c-cfe94770690e-utilities\") pod \"3d691358-63dd-4f9a-800c-cfe94770690e\" (UID: \"3d691358-63dd-4f9a-800c-cfe94770690e\") " Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.523206 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d691358-63dd-4f9a-800c-cfe94770690e-catalog-content\") pod \"3d691358-63dd-4f9a-800c-cfe94770690e\" (UID: \"3d691358-63dd-4f9a-800c-cfe94770690e\") " Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.523247 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwsfm\" (UniqueName: \"kubernetes.io/projected/3d691358-63dd-4f9a-800c-cfe94770690e-kube-api-access-lwsfm\") pod \"3d691358-63dd-4f9a-800c-cfe94770690e\" (UID: \"3d691358-63dd-4f9a-800c-cfe94770690e\") " Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.523840 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d691358-63dd-4f9a-800c-cfe94770690e-utilities" (OuterVolumeSpecName: "utilities") pod "3d691358-63dd-4f9a-800c-cfe94770690e" (UID: "3d691358-63dd-4f9a-800c-cfe94770690e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.529421 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d691358-63dd-4f9a-800c-cfe94770690e-kube-api-access-lwsfm" (OuterVolumeSpecName: "kube-api-access-lwsfm") pod "3d691358-63dd-4f9a-800c-cfe94770690e" (UID: "3d691358-63dd-4f9a-800c-cfe94770690e"). InnerVolumeSpecName "kube-api-access-lwsfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.588282 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d691358-63dd-4f9a-800c-cfe94770690e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d691358-63dd-4f9a-800c-cfe94770690e" (UID: "3d691358-63dd-4f9a-800c-cfe94770690e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.625865 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d691358-63dd-4f9a-800c-cfe94770690e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.625920 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwsfm\" (UniqueName: \"kubernetes.io/projected/3d691358-63dd-4f9a-800c-cfe94770690e-kube-api-access-lwsfm\") on node \"crc\" DevicePath \"\"" Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.625938 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d691358-63dd-4f9a-800c-cfe94770690e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.931007 4756 generic.go:334] "Generic (PLEG): container finished" podID="3d691358-63dd-4f9a-800c-cfe94770690e" containerID="22d6722f8649632b0d3e3e8efe968485904daf728824383f957db8d5e19543c9" exitCode=0 Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.931061 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85kfw" event={"ID":"3d691358-63dd-4f9a-800c-cfe94770690e","Type":"ContainerDied","Data":"22d6722f8649632b0d3e3e8efe968485904daf728824383f957db8d5e19543c9"} Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.931103 4756 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-85kfw" event={"ID":"3d691358-63dd-4f9a-800c-cfe94770690e","Type":"ContainerDied","Data":"baa7b25dc55a28b3d946060e84e9a9229578b1566ce54a8aef6bd7160e86b83a"} Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.931126 4756 scope.go:117] "RemoveContainer" containerID="22d6722f8649632b0d3e3e8efe968485904daf728824383f957db8d5e19543c9" Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.931211 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85kfw" Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.969717 4756 scope.go:117] "RemoveContainer" containerID="ffc92902e31513799c6870712f77709496d4874474c5dcb3ce6651c565d1bd75" Nov 24 12:51:24 crc kubenswrapper[4756]: I1124 12:51:24.989214 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-85kfw"] Nov 24 12:51:25 crc kubenswrapper[4756]: I1124 12:51:25.003467 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-85kfw"] Nov 24 12:51:25 crc kubenswrapper[4756]: I1124 12:51:25.013423 4756 scope.go:117] "RemoveContainer" containerID="556855d65a77d13ee9a2d92606eb326162ee97ada1cb3e4fb20199884fb1f0bf" Nov 24 12:51:25 crc kubenswrapper[4756]: I1124 12:51:25.053805 4756 scope.go:117] "RemoveContainer" containerID="22d6722f8649632b0d3e3e8efe968485904daf728824383f957db8d5e19543c9" Nov 24 12:51:25 crc kubenswrapper[4756]: E1124 12:51:25.054759 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22d6722f8649632b0d3e3e8efe968485904daf728824383f957db8d5e19543c9\": container with ID starting with 22d6722f8649632b0d3e3e8efe968485904daf728824383f957db8d5e19543c9 not found: ID does not exist" containerID="22d6722f8649632b0d3e3e8efe968485904daf728824383f957db8d5e19543c9" Nov 24 12:51:25 crc kubenswrapper[4756]: I1124 
12:51:25.054805 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d6722f8649632b0d3e3e8efe968485904daf728824383f957db8d5e19543c9"} err="failed to get container status \"22d6722f8649632b0d3e3e8efe968485904daf728824383f957db8d5e19543c9\": rpc error: code = NotFound desc = could not find container \"22d6722f8649632b0d3e3e8efe968485904daf728824383f957db8d5e19543c9\": container with ID starting with 22d6722f8649632b0d3e3e8efe968485904daf728824383f957db8d5e19543c9 not found: ID does not exist" Nov 24 12:51:25 crc kubenswrapper[4756]: I1124 12:51:25.054833 4756 scope.go:117] "RemoveContainer" containerID="ffc92902e31513799c6870712f77709496d4874474c5dcb3ce6651c565d1bd75" Nov 24 12:51:25 crc kubenswrapper[4756]: E1124 12:51:25.055406 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc92902e31513799c6870712f77709496d4874474c5dcb3ce6651c565d1bd75\": container with ID starting with ffc92902e31513799c6870712f77709496d4874474c5dcb3ce6651c565d1bd75 not found: ID does not exist" containerID="ffc92902e31513799c6870712f77709496d4874474c5dcb3ce6651c565d1bd75" Nov 24 12:51:25 crc kubenswrapper[4756]: I1124 12:51:25.055483 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc92902e31513799c6870712f77709496d4874474c5dcb3ce6651c565d1bd75"} err="failed to get container status \"ffc92902e31513799c6870712f77709496d4874474c5dcb3ce6651c565d1bd75\": rpc error: code = NotFound desc = could not find container \"ffc92902e31513799c6870712f77709496d4874474c5dcb3ce6651c565d1bd75\": container with ID starting with ffc92902e31513799c6870712f77709496d4874474c5dcb3ce6651c565d1bd75 not found: ID does not exist" Nov 24 12:51:25 crc kubenswrapper[4756]: I1124 12:51:25.055533 4756 scope.go:117] "RemoveContainer" containerID="556855d65a77d13ee9a2d92606eb326162ee97ada1cb3e4fb20199884fb1f0bf" Nov 24 12:51:25 crc 
kubenswrapper[4756]: E1124 12:51:25.057958 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"556855d65a77d13ee9a2d92606eb326162ee97ada1cb3e4fb20199884fb1f0bf\": container with ID starting with 556855d65a77d13ee9a2d92606eb326162ee97ada1cb3e4fb20199884fb1f0bf not found: ID does not exist" containerID="556855d65a77d13ee9a2d92606eb326162ee97ada1cb3e4fb20199884fb1f0bf" Nov 24 12:51:25 crc kubenswrapper[4756]: I1124 12:51:25.058439 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"556855d65a77d13ee9a2d92606eb326162ee97ada1cb3e4fb20199884fb1f0bf"} err="failed to get container status \"556855d65a77d13ee9a2d92606eb326162ee97ada1cb3e4fb20199884fb1f0bf\": rpc error: code = NotFound desc = could not find container \"556855d65a77d13ee9a2d92606eb326162ee97ada1cb3e4fb20199884fb1f0bf\": container with ID starting with 556855d65a77d13ee9a2d92606eb326162ee97ada1cb3e4fb20199884fb1f0bf not found: ID does not exist" Nov 24 12:51:26 crc kubenswrapper[4756]: I1124 12:51:26.492242 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d691358-63dd-4f9a-800c-cfe94770690e" path="/var/lib/kubelet/pods/3d691358-63dd-4f9a-800c-cfe94770690e/volumes" Nov 24 12:53:03 crc kubenswrapper[4756]: I1124 12:53:03.478791 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:53:03 crc kubenswrapper[4756]: I1124 12:53:03.479405 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 24 12:53:06 crc kubenswrapper[4756]: I1124 12:53:06.066372 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-74cp2"] Nov 24 12:53:06 crc kubenswrapper[4756]: I1124 12:53:06.082212 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-6e16-account-create-66mlm"] Nov 24 12:53:06 crc kubenswrapper[4756]: I1124 12:53:06.093617 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-74cp2"] Nov 24 12:53:06 crc kubenswrapper[4756]: I1124 12:53:06.107380 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-6e16-account-create-66mlm"] Nov 24 12:53:06 crc kubenswrapper[4756]: I1124 12:53:06.493551 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bbceb03-940a-4f42-8252-7d4f5ee1b4d2" path="/var/lib/kubelet/pods/0bbceb03-940a-4f42-8252-7d4f5ee1b4d2/volumes" Nov 24 12:53:06 crc kubenswrapper[4756]: I1124 12:53:06.497829 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b941b63-700f-407d-b545-71c5b1e54f2c" path="/var/lib/kubelet/pods/3b941b63-700f-407d-b545-71c5b1e54f2c/volumes" Nov 24 12:53:16 crc kubenswrapper[4756]: I1124 12:53:16.894629 4756 scope.go:117] "RemoveContainer" containerID="0943e2249df9895df437f12d282b14a27f5f138d2cead318cfcf7dd916ccf6bc" Nov 24 12:53:16 crc kubenswrapper[4756]: I1124 12:53:16.929825 4756 scope.go:117] "RemoveContainer" containerID="552ffd6d6bad8035256d2cb371ce468e8538329f075cf3fe3591c0020a7e92c7" Nov 24 12:53:19 crc kubenswrapper[4756]: I1124 12:53:19.075049 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a0b9-account-create-9q7xc"] Nov 24 12:53:19 crc kubenswrapper[4756]: I1124 12:53:19.119320 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a0b9-account-create-9q7xc"] Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.036604 4756 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/keystone-a2de-account-create-jjkvc"] Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.045784 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-dpjcl"] Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.056895 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7c0e-account-create-xsh7k"] Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.065427 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7c0e-account-create-xsh7k"] Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.073962 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a2de-account-create-jjkvc"] Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.082826 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-dpjcl"] Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.091662 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-2kfkz"] Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.099724 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rjhrm"] Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.107299 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-2kfkz"] Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.115555 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rjhrm"] Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.488797 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c19843e-ef57-4ca4-bc56-992f31cc5a87" path="/var/lib/kubelet/pods/4c19843e-ef57-4ca4-bc56-992f31cc5a87/volumes" Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.494590 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d8e682-5102-429f-8b52-c8c962ef8ebd" 
path="/var/lib/kubelet/pods/84d8e682-5102-429f-8b52-c8c962ef8ebd/volumes" Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.496222 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b08308-274c-46b2-a129-568fc7acc250" path="/var/lib/kubelet/pods/b1b08308-274c-46b2-a129-568fc7acc250/volumes" Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.499313 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7050330-b07e-4f5a-9fca-3ad560a9cb19" path="/var/lib/kubelet/pods/b7050330-b07e-4f5a-9fca-3ad560a9cb19/volumes" Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.499996 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de63eecd-9c64-459c-a274-a8bfc7362544" path="/var/lib/kubelet/pods/de63eecd-9c64-459c-a274-a8bfc7362544/volumes" Nov 24 12:53:20 crc kubenswrapper[4756]: I1124 12:53:20.501881 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5662dd7-e194-4cff-8d36-a51bd442adc9" path="/var/lib/kubelet/pods/e5662dd7-e194-4cff-8d36-a51bd442adc9/volumes" Nov 24 12:53:29 crc kubenswrapper[4756]: E1124 12:53:29.576898 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e481796_37f1_413f_8274_2d32d2f3ef5c.slice/crio-conmon-d5c1075c3b3b0a9b26c7a7bf3025db5e616cbd12a144cd07e42c4404e903071e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e481796_37f1_413f_8274_2d32d2f3ef5c.slice/crio-d5c1075c3b3b0a9b26c7a7bf3025db5e616cbd12a144cd07e42c4404e903071e.scope\": RecentStats: unable to find data in memory cache]" Nov 24 12:53:30 crc kubenswrapper[4756]: I1124 12:53:30.269608 4756 generic.go:334] "Generic (PLEG): container finished" podID="4e481796-37f1-413f-8274-2d32d2f3ef5c" containerID="d5c1075c3b3b0a9b26c7a7bf3025db5e616cbd12a144cd07e42c4404e903071e" exitCode=0 Nov 24 
12:53:30 crc kubenswrapper[4756]: I1124 12:53:30.269694 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" event={"ID":"4e481796-37f1-413f-8274-2d32d2f3ef5c","Type":"ContainerDied","Data":"d5c1075c3b3b0a9b26c7a7bf3025db5e616cbd12a144cd07e42c4404e903071e"} Nov 24 12:53:31 crc kubenswrapper[4756]: I1124 12:53:31.712720 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:53:31 crc kubenswrapper[4756]: I1124 12:53:31.842369 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-bootstrap-combined-ca-bundle\") pod \"4e481796-37f1-413f-8274-2d32d2f3ef5c\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " Nov 24 12:53:31 crc kubenswrapper[4756]: I1124 12:53:31.842428 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-inventory\") pod \"4e481796-37f1-413f-8274-2d32d2f3ef5c\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " Nov 24 12:53:31 crc kubenswrapper[4756]: I1124 12:53:31.842556 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxm8r\" (UniqueName: \"kubernetes.io/projected/4e481796-37f1-413f-8274-2d32d2f3ef5c-kube-api-access-wxm8r\") pod \"4e481796-37f1-413f-8274-2d32d2f3ef5c\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " Nov 24 12:53:31 crc kubenswrapper[4756]: I1124 12:53:31.842592 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-ssh-key\") pod \"4e481796-37f1-413f-8274-2d32d2f3ef5c\" (UID: \"4e481796-37f1-413f-8274-2d32d2f3ef5c\") " Nov 24 12:53:31 crc 
kubenswrapper[4756]: I1124 12:53:31.847520 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4e481796-37f1-413f-8274-2d32d2f3ef5c" (UID: "4e481796-37f1-413f-8274-2d32d2f3ef5c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:53:31 crc kubenswrapper[4756]: I1124 12:53:31.847947 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e481796-37f1-413f-8274-2d32d2f3ef5c-kube-api-access-wxm8r" (OuterVolumeSpecName: "kube-api-access-wxm8r") pod "4e481796-37f1-413f-8274-2d32d2f3ef5c" (UID: "4e481796-37f1-413f-8274-2d32d2f3ef5c"). InnerVolumeSpecName "kube-api-access-wxm8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:53:31 crc kubenswrapper[4756]: I1124 12:53:31.878422 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-inventory" (OuterVolumeSpecName: "inventory") pod "4e481796-37f1-413f-8274-2d32d2f3ef5c" (UID: "4e481796-37f1-413f-8274-2d32d2f3ef5c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:53:31 crc kubenswrapper[4756]: I1124 12:53:31.882684 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4e481796-37f1-413f-8274-2d32d2f3ef5c" (UID: "4e481796-37f1-413f-8274-2d32d2f3ef5c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:53:31 crc kubenswrapper[4756]: I1124 12:53:31.946030 4756 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 12:53:31 crc kubenswrapper[4756]: I1124 12:53:31.946421 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:53:31 crc kubenswrapper[4756]: I1124 12:53:31.946541 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxm8r\" (UniqueName: \"kubernetes.io/projected/4e481796-37f1-413f-8274-2d32d2f3ef5c-kube-api-access-wxm8r\") on node \"crc\" DevicePath \"\"" Nov 24 12:53:31 crc kubenswrapper[4756]: I1124 12:53:31.946650 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e481796-37f1-413f-8274-2d32d2f3ef5c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.290898 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" event={"ID":"4e481796-37f1-413f-8274-2d32d2f3ef5c","Type":"ContainerDied","Data":"b86d579788c2b67253fb42205e41f887188433f170e6876923993a849b97f7a0"} Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.290954 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.290964 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b86d579788c2b67253fb42205e41f887188433f170e6876923993a849b97f7a0" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.390567 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh"] Nov 24 12:53:32 crc kubenswrapper[4756]: E1124 12:53:32.390973 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e481796-37f1-413f-8274-2d32d2f3ef5c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.390994 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e481796-37f1-413f-8274-2d32d2f3ef5c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 12:53:32 crc kubenswrapper[4756]: E1124 12:53:32.391013 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d691358-63dd-4f9a-800c-cfe94770690e" containerName="extract-content" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.391020 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d691358-63dd-4f9a-800c-cfe94770690e" containerName="extract-content" Nov 24 12:53:32 crc kubenswrapper[4756]: E1124 12:53:32.391042 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d691358-63dd-4f9a-800c-cfe94770690e" containerName="registry-server" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.391051 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d691358-63dd-4f9a-800c-cfe94770690e" containerName="registry-server" Nov 24 12:53:32 crc kubenswrapper[4756]: E1124 12:53:32.391097 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d691358-63dd-4f9a-800c-cfe94770690e" containerName="extract-utilities" Nov 24 12:53:32 crc kubenswrapper[4756]: 
I1124 12:53:32.391106 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d691358-63dd-4f9a-800c-cfe94770690e" containerName="extract-utilities" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.391358 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d691358-63dd-4f9a-800c-cfe94770690e" containerName="registry-server" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.391390 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e481796-37f1-413f-8274-2d32d2f3ef5c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.392256 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.397797 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.398575 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.398647 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.399434 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.405366 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh"] Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.571997 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26be1a13-f657-4240-ba64-a260d9a6355a-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh\" (UID: \"26be1a13-f657-4240-ba64-a260d9a6355a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.572185 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26be1a13-f657-4240-ba64-a260d9a6355a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh\" (UID: \"26be1a13-f657-4240-ba64-a260d9a6355a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.572424 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfn9j\" (UniqueName: \"kubernetes.io/projected/26be1a13-f657-4240-ba64-a260d9a6355a-kube-api-access-pfn9j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh\" (UID: \"26be1a13-f657-4240-ba64-a260d9a6355a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.674268 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26be1a13-f657-4240-ba64-a260d9a6355a-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh\" (UID: \"26be1a13-f657-4240-ba64-a260d9a6355a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.674633 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26be1a13-f657-4240-ba64-a260d9a6355a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh\" (UID: \"26be1a13-f657-4240-ba64-a260d9a6355a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" Nov 24 12:53:32 crc 
kubenswrapper[4756]: I1124 12:53:32.674686 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfn9j\" (UniqueName: \"kubernetes.io/projected/26be1a13-f657-4240-ba64-a260d9a6355a-kube-api-access-pfn9j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh\" (UID: \"26be1a13-f657-4240-ba64-a260d9a6355a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.678294 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26be1a13-f657-4240-ba64-a260d9a6355a-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh\" (UID: \"26be1a13-f657-4240-ba64-a260d9a6355a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.681980 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26be1a13-f657-4240-ba64-a260d9a6355a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh\" (UID: \"26be1a13-f657-4240-ba64-a260d9a6355a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.692803 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfn9j\" (UniqueName: \"kubernetes.io/projected/26be1a13-f657-4240-ba64-a260d9a6355a-kube-api-access-pfn9j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh\" (UID: \"26be1a13-f657-4240-ba64-a260d9a6355a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" Nov 24 12:53:32 crc kubenswrapper[4756]: I1124 12:53:32.722118 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" Nov 24 12:53:33 crc kubenswrapper[4756]: I1124 12:53:33.375315 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh"] Nov 24 12:53:33 crc kubenswrapper[4756]: I1124 12:53:33.385076 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:53:33 crc kubenswrapper[4756]: I1124 12:53:33.479637 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:53:33 crc kubenswrapper[4756]: I1124 12:53:33.479731 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:53:34 crc kubenswrapper[4756]: I1124 12:53:34.336574 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" event={"ID":"26be1a13-f657-4240-ba64-a260d9a6355a","Type":"ContainerStarted","Data":"697de36d702f8461d48290c0ad16b0f44eae9fffad7b7442c63e5adf1bc819b8"} Nov 24 12:53:34 crc kubenswrapper[4756]: I1124 12:53:34.336933 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" event={"ID":"26be1a13-f657-4240-ba64-a260d9a6355a","Type":"ContainerStarted","Data":"3f0248763ef270042d3759e7d40318874511787baaafc68ae1987c461dcda44e"} Nov 24 12:53:34 crc kubenswrapper[4756]: I1124 12:53:34.362321 4756 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" podStartSLOduration=1.8221816450000001 podStartE2EDuration="2.362298181s" podCreationTimestamp="2025-11-24 12:53:32 +0000 UTC" firstStartedPulling="2025-11-24 12:53:33.384759658 +0000 UTC m=+1545.742273800" lastFinishedPulling="2025-11-24 12:53:33.924876184 +0000 UTC m=+1546.282390336" observedRunningTime="2025-11-24 12:53:34.353663908 +0000 UTC m=+1546.711178080" watchObservedRunningTime="2025-11-24 12:53:34.362298181 +0000 UTC m=+1546.719812323" Nov 24 12:53:54 crc kubenswrapper[4756]: I1124 12:53:54.060697 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-22a1-account-create-w8fss"] Nov 24 12:53:54 crc kubenswrapper[4756]: I1124 12:53:54.073395 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-6ppv6"] Nov 24 12:53:54 crc kubenswrapper[4756]: I1124 12:53:54.082444 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-22a1-account-create-w8fss"] Nov 24 12:53:54 crc kubenswrapper[4756]: I1124 12:53:54.091198 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-6x89j"] Nov 24 12:53:54 crc kubenswrapper[4756]: I1124 12:53:54.142943 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-6x89j"] Nov 24 12:53:54 crc kubenswrapper[4756]: I1124 12:53:54.151461 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-6ppv6"] Nov 24 12:53:54 crc kubenswrapper[4756]: I1124 12:53:54.485886 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2538e4c7-1c70-4919-ba63-e24b6e1fbca0" path="/var/lib/kubelet/pods/2538e4c7-1c70-4919-ba63-e24b6e1fbca0/volumes" Nov 24 12:53:54 crc kubenswrapper[4756]: I1124 12:53:54.488270 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2" 
path="/var/lib/kubelet/pods/b1d330ff-4cea-4cd1-bd42-a7e5e2a7dbf2/volumes" Nov 24 12:53:54 crc kubenswrapper[4756]: I1124 12:53:54.488909 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cba67f4d-09b1-4ef4-b159-c2dad51b1050" path="/var/lib/kubelet/pods/cba67f4d-09b1-4ef4-b159-c2dad51b1050/volumes" Nov 24 12:54:03 crc kubenswrapper[4756]: I1124 12:54:03.479475 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 12:54:03 crc kubenswrapper[4756]: I1124 12:54:03.480114 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 12:54:03 crc kubenswrapper[4756]: I1124 12:54:03.480159 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 12:54:03 crc kubenswrapper[4756]: I1124 12:54:03.480672 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 12:54:03 crc kubenswrapper[4756]: I1124 12:54:03.480728 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" 
containerName="machine-config-daemon" containerID="cri-o://bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" gracePeriod=600 Nov 24 12:54:03 crc kubenswrapper[4756]: E1124 12:54:03.619259 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:54:03 crc kubenswrapper[4756]: I1124 12:54:03.703300 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" exitCode=0 Nov 24 12:54:03 crc kubenswrapper[4756]: I1124 12:54:03.703352 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb"} Nov 24 12:54:03 crc kubenswrapper[4756]: I1124 12:54:03.703394 4756 scope.go:117] "RemoveContainer" containerID="ac456773527ad616724bb83ec4d86cebb123ce3812e319a053d93a0cd5386883" Nov 24 12:54:03 crc kubenswrapper[4756]: I1124 12:54:03.704105 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:54:03 crc kubenswrapper[4756]: E1124 12:54:03.704497 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:54:09 crc kubenswrapper[4756]: I1124 12:54:09.056807 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-66cf-account-create-gl79r"] Nov 24 12:54:09 crc kubenswrapper[4756]: I1124 12:54:09.068358 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-g4t66"] Nov 24 12:54:09 crc kubenswrapper[4756]: I1124 12:54:09.077705 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8016-account-create-4ltbv"] Nov 24 12:54:09 crc kubenswrapper[4756]: I1124 12:54:09.088766 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-g4t66"] Nov 24 12:54:09 crc kubenswrapper[4756]: I1124 12:54:09.097348 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8016-account-create-4ltbv"] Nov 24 12:54:09 crc kubenswrapper[4756]: I1124 12:54:09.105024 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-66cf-account-create-gl79r"] Nov 24 12:54:10 crc kubenswrapper[4756]: I1124 12:54:10.488617 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b34a20-5375-4e57-9934-848828dee2bf" path="/var/lib/kubelet/pods/32b34a20-5375-4e57-9934-848828dee2bf/volumes" Nov 24 12:54:10 crc kubenswrapper[4756]: I1124 12:54:10.491594 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b1e6f3-dbff-4e44-9900-80796af14d00" path="/var/lib/kubelet/pods/43b1e6f3-dbff-4e44-9900-80796af14d00/volumes" Nov 24 12:54:10 crc kubenswrapper[4756]: I1124 12:54:10.492885 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fb7bce2-6bda-4c2b-b661-e4f52c9e2046" path="/var/lib/kubelet/pods/7fb7bce2-6bda-4c2b-b661-e4f52c9e2046/volumes" Nov 24 12:54:16 crc kubenswrapper[4756]: I1124 12:54:16.475770 4756 scope.go:117] "RemoveContainer" 
containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:54:16 crc kubenswrapper[4756]: E1124 12:54:16.476510 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.055626 4756 scope.go:117] "RemoveContainer" containerID="bb0583a4437edd20709ebf344d2a09284117dbcad506d9e2c9aac78c53243a2f" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.164940 4756 scope.go:117] "RemoveContainer" containerID="f63679fc5389a2d3c3c47ffe215b72526b85106d4f3db6cda64d71396b13b728" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.224087 4756 scope.go:117] "RemoveContainer" containerID="80fb22fcba05b490951fdd353a8a5c70589aa4dc6ef60a3d5e660b94a566e63c" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.259273 4756 scope.go:117] "RemoveContainer" containerID="72b2a90874c2bc391f3b2f2a17c67902e654ae2248389e401dad34080fd3da4b" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.318022 4756 scope.go:117] "RemoveContainer" containerID="8b316be604a858d8e18fa501fca5eed0ec24696060c128b40004bbbd255ee134" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.371035 4756 scope.go:117] "RemoveContainer" containerID="39c533b88456aa917835fbf971f968dcefd6361f4a1dbb4218c52e9f3aaa203f" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.420974 4756 scope.go:117] "RemoveContainer" containerID="5b82b9892f191b666d8b836a948eba8f900b7213faa6b85c296d5b8c66e7baae" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.465651 4756 scope.go:117] "RemoveContainer" 
containerID="7dfee5e677531be7f1546bf1d78731a8920e8d16e0013bbd784cb115e33ccc9e" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.504526 4756 scope.go:117] "RemoveContainer" containerID="0d26ab712bb61a3a4f5a9969eba1cf61ae537fe98ea5b2aa65087d9c8af1e9f3" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.525984 4756 scope.go:117] "RemoveContainer" containerID="00b7cd347a5bb3d5f47c515fe6e551b5232e897015fc1a0a039991d82c6cc01a" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.548757 4756 scope.go:117] "RemoveContainer" containerID="c3a5881c0d93497bf985fca293eeb5fe4d235f05cfec96c9b11c0396ab0ef0b7" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.633316 4756 scope.go:117] "RemoveContainer" containerID="d6e635291fb4952799c94031342ad1e35f4ba9412187c6a0ee98486367d43249" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.663865 4756 scope.go:117] "RemoveContainer" containerID="9e54559d6a49c7647e7ac69894872620eb183ff6fef4f7b6c30c56dbec1d8446" Nov 24 12:54:17 crc kubenswrapper[4756]: I1124 12:54:17.715577 4756 scope.go:117] "RemoveContainer" containerID="f54a8507680690dab2940467b788218d574a40a87b42ab5a070716690d524a57" Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.709410 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5n297"] Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.711933 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.733656 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5n297"] Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.803534 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edec971-1b8d-498b-a964-4a90bf59a403-utilities\") pod \"community-operators-5n297\" (UID: \"7edec971-1b8d-498b-a964-4a90bf59a403\") " pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.803832 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfddd\" (UniqueName: \"kubernetes.io/projected/7edec971-1b8d-498b-a964-4a90bf59a403-kube-api-access-gfddd\") pod \"community-operators-5n297\" (UID: \"7edec971-1b8d-498b-a964-4a90bf59a403\") " pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.804007 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edec971-1b8d-498b-a964-4a90bf59a403-catalog-content\") pod \"community-operators-5n297\" (UID: \"7edec971-1b8d-498b-a964-4a90bf59a403\") " pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.906718 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edec971-1b8d-498b-a964-4a90bf59a403-catalog-content\") pod \"community-operators-5n297\" (UID: \"7edec971-1b8d-498b-a964-4a90bf59a403\") " pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.906841 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edec971-1b8d-498b-a964-4a90bf59a403-utilities\") pod \"community-operators-5n297\" (UID: \"7edec971-1b8d-498b-a964-4a90bf59a403\") " pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.906933 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfddd\" (UniqueName: \"kubernetes.io/projected/7edec971-1b8d-498b-a964-4a90bf59a403-kube-api-access-gfddd\") pod \"community-operators-5n297\" (UID: \"7edec971-1b8d-498b-a964-4a90bf59a403\") " pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.907393 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edec971-1b8d-498b-a964-4a90bf59a403-catalog-content\") pod \"community-operators-5n297\" (UID: \"7edec971-1b8d-498b-a964-4a90bf59a403\") " pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.907701 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edec971-1b8d-498b-a964-4a90bf59a403-utilities\") pod \"community-operators-5n297\" (UID: \"7edec971-1b8d-498b-a964-4a90bf59a403\") " pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.909198 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fx8d5"] Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.924925 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx8d5"] Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.925067 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:18 crc kubenswrapper[4756]: I1124 12:54:18.934869 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfddd\" (UniqueName: \"kubernetes.io/projected/7edec971-1b8d-498b-a964-4a90bf59a403-kube-api-access-gfddd\") pod \"community-operators-5n297\" (UID: \"7edec971-1b8d-498b-a964-4a90bf59a403\") " pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.008983 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gp9q\" (UniqueName: \"kubernetes.io/projected/efb6d572-e690-4143-aeff-982f9371c75d-kube-api-access-8gp9q\") pod \"redhat-marketplace-fx8d5\" (UID: \"efb6d572-e690-4143-aeff-982f9371c75d\") " pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.009215 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb6d572-e690-4143-aeff-982f9371c75d-catalog-content\") pod \"redhat-marketplace-fx8d5\" (UID: \"efb6d572-e690-4143-aeff-982f9371c75d\") " pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.009366 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb6d572-e690-4143-aeff-982f9371c75d-utilities\") pod \"redhat-marketplace-fx8d5\" (UID: \"efb6d572-e690-4143-aeff-982f9371c75d\") " pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.042231 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.112695 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb6d572-e690-4143-aeff-982f9371c75d-catalog-content\") pod \"redhat-marketplace-fx8d5\" (UID: \"efb6d572-e690-4143-aeff-982f9371c75d\") " pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.113142 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb6d572-e690-4143-aeff-982f9371c75d-catalog-content\") pod \"redhat-marketplace-fx8d5\" (UID: \"efb6d572-e690-4143-aeff-982f9371c75d\") " pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.113575 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb6d572-e690-4143-aeff-982f9371c75d-utilities\") pod \"redhat-marketplace-fx8d5\" (UID: \"efb6d572-e690-4143-aeff-982f9371c75d\") " pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.113612 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb6d572-e690-4143-aeff-982f9371c75d-utilities\") pod \"redhat-marketplace-fx8d5\" (UID: \"efb6d572-e690-4143-aeff-982f9371c75d\") " pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.113727 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gp9q\" (UniqueName: \"kubernetes.io/projected/efb6d572-e690-4143-aeff-982f9371c75d-kube-api-access-8gp9q\") pod \"redhat-marketplace-fx8d5\" (UID: \"efb6d572-e690-4143-aeff-982f9371c75d\") " 
pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.136111 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gp9q\" (UniqueName: \"kubernetes.io/projected/efb6d572-e690-4143-aeff-982f9371c75d-kube-api-access-8gp9q\") pod \"redhat-marketplace-fx8d5\" (UID: \"efb6d572-e690-4143-aeff-982f9371c75d\") " pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.288800 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.578466 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5n297"] Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.761891 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx8d5"] Nov 24 12:54:19 crc kubenswrapper[4756]: W1124 12:54:19.767408 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefb6d572_e690_4143_aeff_982f9371c75d.slice/crio-e8f0804d9d1fd5d54ff8e4de11c623a29cc50b01525fe48bad1035f56379ab3b WatchSource:0}: Error finding container e8f0804d9d1fd5d54ff8e4de11c623a29cc50b01525fe48bad1035f56379ab3b: Status 404 returned error can't find the container with id e8f0804d9d1fd5d54ff8e4de11c623a29cc50b01525fe48bad1035f56379ab3b Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.934540 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx8d5" event={"ID":"efb6d572-e690-4143-aeff-982f9371c75d","Type":"ContainerStarted","Data":"e8f0804d9d1fd5d54ff8e4de11c623a29cc50b01525fe48bad1035f56379ab3b"} Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.937363 4756 generic.go:334] "Generic (PLEG): container finished" 
podID="7edec971-1b8d-498b-a964-4a90bf59a403" containerID="9e6cd67108a663eb5596901c05bb81f4b720c4ab766c434c12a0a8dee4f9a1ee" exitCode=0 Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.937402 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n297" event={"ID":"7edec971-1b8d-498b-a964-4a90bf59a403","Type":"ContainerDied","Data":"9e6cd67108a663eb5596901c05bb81f4b720c4ab766c434c12a0a8dee4f9a1ee"} Nov 24 12:54:19 crc kubenswrapper[4756]: I1124 12:54:19.937423 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n297" event={"ID":"7edec971-1b8d-498b-a964-4a90bf59a403","Type":"ContainerStarted","Data":"400b3f76bbcdfbb3f8b7d6878fccd080aa2407140712cc21f0d0eaf0bdf25877"} Nov 24 12:54:20 crc kubenswrapper[4756]: I1124 12:54:20.046568 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-tflqn"] Nov 24 12:54:20 crc kubenswrapper[4756]: I1124 12:54:20.063117 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-tflqn"] Nov 24 12:54:20 crc kubenswrapper[4756]: I1124 12:54:20.496645 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79cae2a4-d229-4c70-b19f-b9016c530697" path="/var/lib/kubelet/pods/79cae2a4-d229-4c70-b19f-b9016c530697/volumes" Nov 24 12:54:20 crc kubenswrapper[4756]: I1124 12:54:20.953586 4756 generic.go:334] "Generic (PLEG): container finished" podID="efb6d572-e690-4143-aeff-982f9371c75d" containerID="69d39de08aff4da3996a76a5245009586e21117d95af784474f1bc31a3b74ce9" exitCode=0 Nov 24 12:54:20 crc kubenswrapper[4756]: I1124 12:54:20.953684 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx8d5" event={"ID":"efb6d572-e690-4143-aeff-982f9371c75d","Type":"ContainerDied","Data":"69d39de08aff4da3996a76a5245009586e21117d95af784474f1bc31a3b74ce9"} Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.518089 4756 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z5jd9"] Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.523056 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.528471 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5jd9"] Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.698025 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c6eafc7-5876-4e19-b9e6-b945663c3e68-utilities\") pod \"redhat-operators-z5jd9\" (UID: \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\") " pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.698522 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w94b8\" (UniqueName: \"kubernetes.io/projected/8c6eafc7-5876-4e19-b9e6-b945663c3e68-kube-api-access-w94b8\") pod \"redhat-operators-z5jd9\" (UID: \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\") " pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.698578 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c6eafc7-5876-4e19-b9e6-b945663c3e68-catalog-content\") pod \"redhat-operators-z5jd9\" (UID: \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\") " pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.800944 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w94b8\" (UniqueName: \"kubernetes.io/projected/8c6eafc7-5876-4e19-b9e6-b945663c3e68-kube-api-access-w94b8\") pod \"redhat-operators-z5jd9\" (UID: 
\"8c6eafc7-5876-4e19-b9e6-b945663c3e68\") " pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.801344 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c6eafc7-5876-4e19-b9e6-b945663c3e68-catalog-content\") pod \"redhat-operators-z5jd9\" (UID: \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\") " pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.801456 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c6eafc7-5876-4e19-b9e6-b945663c3e68-utilities\") pod \"redhat-operators-z5jd9\" (UID: \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\") " pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.802021 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c6eafc7-5876-4e19-b9e6-b945663c3e68-utilities\") pod \"redhat-operators-z5jd9\" (UID: \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\") " pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.802027 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c6eafc7-5876-4e19-b9e6-b945663c3e68-catalog-content\") pod \"redhat-operators-z5jd9\" (UID: \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\") " pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.824899 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w94b8\" (UniqueName: \"kubernetes.io/projected/8c6eafc7-5876-4e19-b9e6-b945663c3e68-kube-api-access-w94b8\") pod \"redhat-operators-z5jd9\" (UID: \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\") " 
pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.850611 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.986750 4756 generic.go:334] "Generic (PLEG): container finished" podID="efb6d572-e690-4143-aeff-982f9371c75d" containerID="115c82c4b4d20b4d8b11d321a362d252a647c72804063dcd674e950a1ea6ddc1" exitCode=0 Nov 24 12:54:22 crc kubenswrapper[4756]: I1124 12:54:22.986821 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx8d5" event={"ID":"efb6d572-e690-4143-aeff-982f9371c75d","Type":"ContainerDied","Data":"115c82c4b4d20b4d8b11d321a362d252a647c72804063dcd674e950a1ea6ddc1"} Nov 24 12:54:24 crc kubenswrapper[4756]: I1124 12:54:24.860234 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5jd9"] Nov 24 12:54:25 crc kubenswrapper[4756]: I1124 12:54:25.016781 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5jd9" event={"ID":"8c6eafc7-5876-4e19-b9e6-b945663c3e68","Type":"ContainerStarted","Data":"1915e24cd90f33d9bd11cadcc92169790307c12986c1f1917778eec0d0906f5b"} Nov 24 12:54:25 crc kubenswrapper[4756]: I1124 12:54:25.019027 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n297" event={"ID":"7edec971-1b8d-498b-a964-4a90bf59a403","Type":"ContainerStarted","Data":"6a2b6d56e528e0a8a20de80e750864b6fcb6332f635cad2c5923b51da3a5f423"} Nov 24 12:54:26 crc kubenswrapper[4756]: I1124 12:54:26.032530 4756 generic.go:334] "Generic (PLEG): container finished" podID="7edec971-1b8d-498b-a964-4a90bf59a403" containerID="6a2b6d56e528e0a8a20de80e750864b6fcb6332f635cad2c5923b51da3a5f423" exitCode=0 Nov 24 12:54:26 crc kubenswrapper[4756]: I1124 12:54:26.032634 4756 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-5n297" event={"ID":"7edec971-1b8d-498b-a964-4a90bf59a403","Type":"ContainerDied","Data":"6a2b6d56e528e0a8a20de80e750864b6fcb6332f635cad2c5923b51da3a5f423"} Nov 24 12:54:26 crc kubenswrapper[4756]: I1124 12:54:26.047523 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-x592j"] Nov 24 12:54:26 crc kubenswrapper[4756]: I1124 12:54:26.048732 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx8d5" event={"ID":"efb6d572-e690-4143-aeff-982f9371c75d","Type":"ContainerStarted","Data":"294027209fb20536ae73768c3193cf44816a76f3d188e99fef4b1000b99dda5c"} Nov 24 12:54:26 crc kubenswrapper[4756]: I1124 12:54:26.057308 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-x592j"] Nov 24 12:54:26 crc kubenswrapper[4756]: I1124 12:54:26.059560 4756 generic.go:334] "Generic (PLEG): container finished" podID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" containerID="3dded7fc89d97ce26b16a9cbfc899ab7d319dd0901aa3de3be94066546573c40" exitCode=0 Nov 24 12:54:26 crc kubenswrapper[4756]: I1124 12:54:26.059615 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5jd9" event={"ID":"8c6eafc7-5876-4e19-b9e6-b945663c3e68","Type":"ContainerDied","Data":"3dded7fc89d97ce26b16a9cbfc899ab7d319dd0901aa3de3be94066546573c40"} Nov 24 12:54:26 crc kubenswrapper[4756]: I1124 12:54:26.108031 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fx8d5" podStartSLOduration=4.342110974 podStartE2EDuration="8.108010151s" podCreationTimestamp="2025-11-24 12:54:18 +0000 UTC" firstStartedPulling="2025-11-24 12:54:20.957703791 +0000 UTC m=+1593.315217933" lastFinishedPulling="2025-11-24 12:54:24.723602968 +0000 UTC m=+1597.081117110" observedRunningTime="2025-11-24 12:54:26.102288546 +0000 UTC m=+1598.459802698" 
watchObservedRunningTime="2025-11-24 12:54:26.108010151 +0000 UTC m=+1598.465524293" Nov 24 12:54:26 crc kubenswrapper[4756]: I1124 12:54:26.503408 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="519ea567-18c2-49a6-8e45-ad4bb39ecd90" path="/var/lib/kubelet/pods/519ea567-18c2-49a6-8e45-ad4bb39ecd90/volumes" Nov 24 12:54:27 crc kubenswrapper[4756]: I1124 12:54:27.071862 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n297" event={"ID":"7edec971-1b8d-498b-a964-4a90bf59a403","Type":"ContainerStarted","Data":"47feca8e2a99e21abcf04d4465c90ec29f6da3f3d85b7494b927b25552e1c948"} Nov 24 12:54:27 crc kubenswrapper[4756]: I1124 12:54:27.073810 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5jd9" event={"ID":"8c6eafc7-5876-4e19-b9e6-b945663c3e68","Type":"ContainerStarted","Data":"0dff66685d304cb3b8d5f0b5d9c31cbecd364a374eb94a994ea3529d0b75d40b"} Nov 24 12:54:27 crc kubenswrapper[4756]: I1124 12:54:27.092875 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5n297" podStartSLOduration=2.579353431 podStartE2EDuration="9.092857621s" podCreationTimestamp="2025-11-24 12:54:18 +0000 UTC" firstStartedPulling="2025-11-24 12:54:19.938987953 +0000 UTC m=+1592.296502095" lastFinishedPulling="2025-11-24 12:54:26.452492143 +0000 UTC m=+1598.810006285" observedRunningTime="2025-11-24 12:54:27.086672424 +0000 UTC m=+1599.444186576" watchObservedRunningTime="2025-11-24 12:54:27.092857621 +0000 UTC m=+1599.450371773" Nov 24 12:54:29 crc kubenswrapper[4756]: I1124 12:54:29.042876 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:29 crc kubenswrapper[4756]: I1124 12:54:29.043347 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5n297" Nov 
24 12:54:29 crc kubenswrapper[4756]: I1124 12:54:29.094805 4756 generic.go:334] "Generic (PLEG): container finished" podID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" containerID="0dff66685d304cb3b8d5f0b5d9c31cbecd364a374eb94a994ea3529d0b75d40b" exitCode=0 Nov 24 12:54:29 crc kubenswrapper[4756]: I1124 12:54:29.094849 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5jd9" event={"ID":"8c6eafc7-5876-4e19-b9e6-b945663c3e68","Type":"ContainerDied","Data":"0dff66685d304cb3b8d5f0b5d9c31cbecd364a374eb94a994ea3529d0b75d40b"} Nov 24 12:54:29 crc kubenswrapper[4756]: I1124 12:54:29.110554 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:29 crc kubenswrapper[4756]: I1124 12:54:29.289715 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:29 crc kubenswrapper[4756]: I1124 12:54:29.291473 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:30 crc kubenswrapper[4756]: I1124 12:54:30.105987 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5jd9" event={"ID":"8c6eafc7-5876-4e19-b9e6-b945663c3e68","Type":"ContainerStarted","Data":"6f20eacbc7e844ab2511bfa33acade7b4ad3f92a1a8f69d2bb8507700380e125"} Nov 24 12:54:30 crc kubenswrapper[4756]: I1124 12:54:30.123660 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z5jd9" podStartSLOduration=4.698020738 podStartE2EDuration="8.123639307s" podCreationTimestamp="2025-11-24 12:54:22 +0000 UTC" firstStartedPulling="2025-11-24 12:54:26.061280157 +0000 UTC m=+1598.418794299" lastFinishedPulling="2025-11-24 12:54:29.486898726 +0000 UTC m=+1601.844412868" observedRunningTime="2025-11-24 12:54:30.121630112 +0000 UTC 
m=+1602.479144274" watchObservedRunningTime="2025-11-24 12:54:30.123639307 +0000 UTC m=+1602.481153449" Nov 24 12:54:30 crc kubenswrapper[4756]: I1124 12:54:30.348416 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fx8d5" podUID="efb6d572-e690-4143-aeff-982f9371c75d" containerName="registry-server" probeResult="failure" output=< Nov 24 12:54:30 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 12:54:30 crc kubenswrapper[4756]: > Nov 24 12:54:31 crc kubenswrapper[4756]: I1124 12:54:31.475744 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:54:31 crc kubenswrapper[4756]: E1124 12:54:31.476073 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:54:32 crc kubenswrapper[4756]: I1124 12:54:32.044639 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-fm4f8"] Nov 24 12:54:32 crc kubenswrapper[4756]: I1124 12:54:32.053459 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-fm4f8"] Nov 24 12:54:32 crc kubenswrapper[4756]: I1124 12:54:32.492796 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73042cf1-c8fa-417b-b688-cfed5a034a8b" path="/var/lib/kubelet/pods/73042cf1-c8fa-417b-b688-cfed5a034a8b/volumes" Nov 24 12:54:32 crc kubenswrapper[4756]: I1124 12:54:32.851055 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:32 crc kubenswrapper[4756]: I1124 
12:54:32.851097 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:33 crc kubenswrapper[4756]: I1124 12:54:33.899460 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z5jd9" podUID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" containerName="registry-server" probeResult="failure" output=< Nov 24 12:54:33 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 12:54:33 crc kubenswrapper[4756]: > Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.101456 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5n297" Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.242365 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5n297"] Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.283316 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kgmg"] Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.290221 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5kgmg" podUID="a32ab267-a3aa-4fa5-80e5-ebbe78465af3" containerName="registry-server" containerID="cri-o://a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b" gracePeriod=2 Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.363174 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.447443 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:39 crc kubenswrapper[4756]: E1124 12:54:39.784369 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b is running failed: container process not found" containerID="a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 12:54:39 crc kubenswrapper[4756]: E1124 12:54:39.785572 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b is running failed: container process not found" containerID="a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 12:54:39 crc kubenswrapper[4756]: E1124 12:54:39.786107 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b is running failed: container process not found" containerID="a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 12:54:39 crc kubenswrapper[4756]: E1124 12:54:39.786148 4756 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-5kgmg" podUID="a32ab267-a3aa-4fa5-80e5-ebbe78465af3" containerName="registry-server" Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.845042 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.884102 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2zdp\" (UniqueName: \"kubernetes.io/projected/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-kube-api-access-s2zdp\") pod \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\" (UID: \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\") " Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.884171 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-catalog-content\") pod \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\" (UID: \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\") " Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.884263 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-utilities\") pod \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\" (UID: \"a32ab267-a3aa-4fa5-80e5-ebbe78465af3\") " Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.886766 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-utilities" (OuterVolumeSpecName: "utilities") pod "a32ab267-a3aa-4fa5-80e5-ebbe78465af3" (UID: "a32ab267-a3aa-4fa5-80e5-ebbe78465af3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.901639 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-kube-api-access-s2zdp" (OuterVolumeSpecName: "kube-api-access-s2zdp") pod "a32ab267-a3aa-4fa5-80e5-ebbe78465af3" (UID: "a32ab267-a3aa-4fa5-80e5-ebbe78465af3"). InnerVolumeSpecName "kube-api-access-s2zdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.986275 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2zdp\" (UniqueName: \"kubernetes.io/projected/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-kube-api-access-s2zdp\") on node \"crc\" DevicePath \"\"" Nov 24 12:54:39 crc kubenswrapper[4756]: I1124 12:54:39.986311 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.006623 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a32ab267-a3aa-4fa5-80e5-ebbe78465af3" (UID: "a32ab267-a3aa-4fa5-80e5-ebbe78465af3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.088570 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32ab267-a3aa-4fa5-80e5-ebbe78465af3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.216045 4756 generic.go:334] "Generic (PLEG): container finished" podID="a32ab267-a3aa-4fa5-80e5-ebbe78465af3" containerID="a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b" exitCode=0 Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.217318 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kgmg" Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.217810 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kgmg" event={"ID":"a32ab267-a3aa-4fa5-80e5-ebbe78465af3","Type":"ContainerDied","Data":"a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b"} Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.217849 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kgmg" event={"ID":"a32ab267-a3aa-4fa5-80e5-ebbe78465af3","Type":"ContainerDied","Data":"4c0ddf494150de85fd2e881ed06f7ae68957b67ef5961af825d7d7ea19c94ceb"} Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.217873 4756 scope.go:117] "RemoveContainer" containerID="a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b" Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.264868 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kgmg"] Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.279928 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5kgmg"] Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.281193 4756 scope.go:117] "RemoveContainer" containerID="ddd6473ccf98bb334ca3c8ff90d1690a9c0f0964a36870cd2a5109557ec619a6" Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.348820 4756 scope.go:117] "RemoveContainer" containerID="a0c494e446b9dd3ad2e39d6907f5dcbfdebdff5aac25181b29d46b5ee66d29bb" Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.370989 4756 scope.go:117] "RemoveContainer" containerID="a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b" Nov 24 12:54:40 crc kubenswrapper[4756]: E1124 12:54:40.371384 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b\": container with ID starting with a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b not found: ID does not exist" containerID="a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b" Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.371429 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b"} err="failed to get container status \"a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b\": rpc error: code = NotFound desc = could not find container \"a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b\": container with ID starting with a7a72769972200a2caa4c2154f02c5ae114ae6eb4b265374c56bc399ad451c5b not found: ID does not exist" Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.371469 4756 scope.go:117] "RemoveContainer" containerID="ddd6473ccf98bb334ca3c8ff90d1690a9c0f0964a36870cd2a5109557ec619a6" Nov 24 12:54:40 crc kubenswrapper[4756]: E1124 12:54:40.372192 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd6473ccf98bb334ca3c8ff90d1690a9c0f0964a36870cd2a5109557ec619a6\": container with ID starting with ddd6473ccf98bb334ca3c8ff90d1690a9c0f0964a36870cd2a5109557ec619a6 not found: ID does not exist" containerID="ddd6473ccf98bb334ca3c8ff90d1690a9c0f0964a36870cd2a5109557ec619a6" Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.372264 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd6473ccf98bb334ca3c8ff90d1690a9c0f0964a36870cd2a5109557ec619a6"} err="failed to get container status \"ddd6473ccf98bb334ca3c8ff90d1690a9c0f0964a36870cd2a5109557ec619a6\": rpc error: code = NotFound desc = could not find container \"ddd6473ccf98bb334ca3c8ff90d1690a9c0f0964a36870cd2a5109557ec619a6\": container with ID 
starting with ddd6473ccf98bb334ca3c8ff90d1690a9c0f0964a36870cd2a5109557ec619a6 not found: ID does not exist" Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.372364 4756 scope.go:117] "RemoveContainer" containerID="a0c494e446b9dd3ad2e39d6907f5dcbfdebdff5aac25181b29d46b5ee66d29bb" Nov 24 12:54:40 crc kubenswrapper[4756]: E1124 12:54:40.373046 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0c494e446b9dd3ad2e39d6907f5dcbfdebdff5aac25181b29d46b5ee66d29bb\": container with ID starting with a0c494e446b9dd3ad2e39d6907f5dcbfdebdff5aac25181b29d46b5ee66d29bb not found: ID does not exist" containerID="a0c494e446b9dd3ad2e39d6907f5dcbfdebdff5aac25181b29d46b5ee66d29bb" Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.373076 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c494e446b9dd3ad2e39d6907f5dcbfdebdff5aac25181b29d46b5ee66d29bb"} err="failed to get container status \"a0c494e446b9dd3ad2e39d6907f5dcbfdebdff5aac25181b29d46b5ee66d29bb\": rpc error: code = NotFound desc = could not find container \"a0c494e446b9dd3ad2e39d6907f5dcbfdebdff5aac25181b29d46b5ee66d29bb\": container with ID starting with a0c494e446b9dd3ad2e39d6907f5dcbfdebdff5aac25181b29d46b5ee66d29bb not found: ID does not exist" Nov 24 12:54:40 crc kubenswrapper[4756]: I1124 12:54:40.526119 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a32ab267-a3aa-4fa5-80e5-ebbe78465af3" path="/var/lib/kubelet/pods/a32ab267-a3aa-4fa5-80e5-ebbe78465af3/volumes" Nov 24 12:54:41 crc kubenswrapper[4756]: I1124 12:54:41.786184 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx8d5"] Nov 24 12:54:41 crc kubenswrapper[4756]: I1124 12:54:41.786738 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fx8d5" 
podUID="efb6d572-e690-4143-aeff-982f9371c75d" containerName="registry-server" containerID="cri-o://294027209fb20536ae73768c3193cf44816a76f3d188e99fef4b1000b99dda5c" gracePeriod=2 Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.240891 4756 generic.go:334] "Generic (PLEG): container finished" podID="efb6d572-e690-4143-aeff-982f9371c75d" containerID="294027209fb20536ae73768c3193cf44816a76f3d188e99fef4b1000b99dda5c" exitCode=0 Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.240916 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx8d5" event={"ID":"efb6d572-e690-4143-aeff-982f9371c75d","Type":"ContainerDied","Data":"294027209fb20536ae73768c3193cf44816a76f3d188e99fef4b1000b99dda5c"} Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.241305 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx8d5" event={"ID":"efb6d572-e690-4143-aeff-982f9371c75d","Type":"ContainerDied","Data":"e8f0804d9d1fd5d54ff8e4de11c623a29cc50b01525fe48bad1035f56379ab3b"} Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.241327 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8f0804d9d1fd5d54ff8e4de11c623a29cc50b01525fe48bad1035f56379ab3b" Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.312146 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.328264 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb6d572-e690-4143-aeff-982f9371c75d-catalog-content\") pod \"efb6d572-e690-4143-aeff-982f9371c75d\" (UID: \"efb6d572-e690-4143-aeff-982f9371c75d\") " Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.328356 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gp9q\" (UniqueName: \"kubernetes.io/projected/efb6d572-e690-4143-aeff-982f9371c75d-kube-api-access-8gp9q\") pod \"efb6d572-e690-4143-aeff-982f9371c75d\" (UID: \"efb6d572-e690-4143-aeff-982f9371c75d\") " Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.328417 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb6d572-e690-4143-aeff-982f9371c75d-utilities\") pod \"efb6d572-e690-4143-aeff-982f9371c75d\" (UID: \"efb6d572-e690-4143-aeff-982f9371c75d\") " Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.329340 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efb6d572-e690-4143-aeff-982f9371c75d-utilities" (OuterVolumeSpecName: "utilities") pod "efb6d572-e690-4143-aeff-982f9371c75d" (UID: "efb6d572-e690-4143-aeff-982f9371c75d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.334559 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efb6d572-e690-4143-aeff-982f9371c75d-kube-api-access-8gp9q" (OuterVolumeSpecName: "kube-api-access-8gp9q") pod "efb6d572-e690-4143-aeff-982f9371c75d" (UID: "efb6d572-e690-4143-aeff-982f9371c75d"). InnerVolumeSpecName "kube-api-access-8gp9q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.348740 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efb6d572-e690-4143-aeff-982f9371c75d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efb6d572-e690-4143-aeff-982f9371c75d" (UID: "efb6d572-e690-4143-aeff-982f9371c75d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.429976 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb6d572-e690-4143-aeff-982f9371c75d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.430015 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb6d572-e690-4143-aeff-982f9371c75d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:54:42 crc kubenswrapper[4756]: I1124 12:54:42.430029 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gp9q\" (UniqueName: \"kubernetes.io/projected/efb6d572-e690-4143-aeff-982f9371c75d-kube-api-access-8gp9q\") on node \"crc\" DevicePath \"\"" Nov 24 12:54:43 crc kubenswrapper[4756]: I1124 12:54:43.251104 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx8d5" Nov 24 12:54:43 crc kubenswrapper[4756]: I1124 12:54:43.277031 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx8d5"] Nov 24 12:54:43 crc kubenswrapper[4756]: I1124 12:54:43.287377 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx8d5"] Nov 24 12:54:43 crc kubenswrapper[4756]: I1124 12:54:43.900285 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z5jd9" podUID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" containerName="registry-server" probeResult="failure" output=< Nov 24 12:54:43 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 12:54:43 crc kubenswrapper[4756]: > Nov 24 12:54:44 crc kubenswrapper[4756]: I1124 12:54:44.488240 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efb6d572-e690-4143-aeff-982f9371c75d" path="/var/lib/kubelet/pods/efb6d572-e690-4143-aeff-982f9371c75d/volumes" Nov 24 12:54:46 crc kubenswrapper[4756]: I1124 12:54:46.475511 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:54:46 crc kubenswrapper[4756]: E1124 12:54:46.476007 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:54:52 crc kubenswrapper[4756]: I1124 12:54:52.906652 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:52 crc kubenswrapper[4756]: I1124 
12:54:52.968360 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:56 crc kubenswrapper[4756]: I1124 12:54:56.357278 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5jd9"] Nov 24 12:54:56 crc kubenswrapper[4756]: I1124 12:54:56.358027 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z5jd9" podUID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" containerName="registry-server" containerID="cri-o://6f20eacbc7e844ab2511bfa33acade7b4ad3f92a1a8f69d2bb8507700380e125" gracePeriod=2 Nov 24 12:54:56 crc kubenswrapper[4756]: I1124 12:54:56.607120 4756 generic.go:334] "Generic (PLEG): container finished" podID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" containerID="6f20eacbc7e844ab2511bfa33acade7b4ad3f92a1a8f69d2bb8507700380e125" exitCode=0 Nov 24 12:54:56 crc kubenswrapper[4756]: I1124 12:54:56.607190 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5jd9" event={"ID":"8c6eafc7-5876-4e19-b9e6-b945663c3e68","Type":"ContainerDied","Data":"6f20eacbc7e844ab2511bfa33acade7b4ad3f92a1a8f69d2bb8507700380e125"} Nov 24 12:54:56 crc kubenswrapper[4756]: I1124 12:54:56.843669 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.008428 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c6eafc7-5876-4e19-b9e6-b945663c3e68-utilities\") pod \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\" (UID: \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\") " Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.008518 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c6eafc7-5876-4e19-b9e6-b945663c3e68-catalog-content\") pod \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\" (UID: \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\") " Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.008621 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94b8\" (UniqueName: \"kubernetes.io/projected/8c6eafc7-5876-4e19-b9e6-b945663c3e68-kube-api-access-w94b8\") pod \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\" (UID: \"8c6eafc7-5876-4e19-b9e6-b945663c3e68\") " Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.009617 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c6eafc7-5876-4e19-b9e6-b945663c3e68-utilities" (OuterVolumeSpecName: "utilities") pod "8c6eafc7-5876-4e19-b9e6-b945663c3e68" (UID: "8c6eafc7-5876-4e19-b9e6-b945663c3e68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.016032 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c6eafc7-5876-4e19-b9e6-b945663c3e68-kube-api-access-w94b8" (OuterVolumeSpecName: "kube-api-access-w94b8") pod "8c6eafc7-5876-4e19-b9e6-b945663c3e68" (UID: "8c6eafc7-5876-4e19-b9e6-b945663c3e68"). InnerVolumeSpecName "kube-api-access-w94b8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.118389 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c6eafc7-5876-4e19-b9e6-b945663c3e68-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.118492 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w94b8\" (UniqueName: \"kubernetes.io/projected/8c6eafc7-5876-4e19-b9e6-b945663c3e68-kube-api-access-w94b8\") on node \"crc\" DevicePath \"\"" Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.145090 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c6eafc7-5876-4e19-b9e6-b945663c3e68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c6eafc7-5876-4e19-b9e6-b945663c3e68" (UID: "8c6eafc7-5876-4e19-b9e6-b945663c3e68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.220335 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c6eafc7-5876-4e19-b9e6-b945663c3e68-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.619845 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5jd9" event={"ID":"8c6eafc7-5876-4e19-b9e6-b945663c3e68","Type":"ContainerDied","Data":"1915e24cd90f33d9bd11cadcc92169790307c12986c1f1917778eec0d0906f5b"} Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.619925 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z5jd9" Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.619926 4756 scope.go:117] "RemoveContainer" containerID="6f20eacbc7e844ab2511bfa33acade7b4ad3f92a1a8f69d2bb8507700380e125" Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.644434 4756 scope.go:117] "RemoveContainer" containerID="0dff66685d304cb3b8d5f0b5d9c31cbecd364a374eb94a994ea3529d0b75d40b" Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.659622 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5jd9"] Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.667761 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z5jd9"] Nov 24 12:54:57 crc kubenswrapper[4756]: I1124 12:54:57.682462 4756 scope.go:117] "RemoveContainer" containerID="3dded7fc89d97ce26b16a9cbfc899ab7d319dd0901aa3de3be94066546573c40" Nov 24 12:54:58 crc kubenswrapper[4756]: I1124 12:54:58.532765 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" path="/var/lib/kubelet/pods/8c6eafc7-5876-4e19-b9e6-b945663c3e68/volumes" Nov 24 12:54:59 crc kubenswrapper[4756]: I1124 12:54:59.476060 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:54:59 crc kubenswrapper[4756]: E1124 12:54:59.476864 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:55:07 crc kubenswrapper[4756]: I1124 12:55:07.082613 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-db-sync-wrtzq"] Nov 24 12:55:07 crc kubenswrapper[4756]: I1124 12:55:07.102965 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-wrtzq"] Nov 24 12:55:08 crc kubenswrapper[4756]: I1124 12:55:08.490336 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f689ad-3620-48b3-ae57-d19148ecb376" path="/var/lib/kubelet/pods/a6f689ad-3620-48b3-ae57-d19148ecb376/volumes" Nov 24 12:55:10 crc kubenswrapper[4756]: I1124 12:55:10.477300 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:55:10 crc kubenswrapper[4756]: E1124 12:55:10.478267 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:55:18 crc kubenswrapper[4756]: I1124 12:55:18.041457 4756 scope.go:117] "RemoveContainer" containerID="bfc21104f20e44b50196ee6e3e300bc2a3506de3eaa9357e6749792fd13c36c7" Nov 24 12:55:18 crc kubenswrapper[4756]: I1124 12:55:18.096111 4756 scope.go:117] "RemoveContainer" containerID="dd5a77e7e4ce162d2a1a10457c784717db0a296e1201d03b2c00f36db1cfed81" Nov 24 12:55:18 crc kubenswrapper[4756]: I1124 12:55:18.152041 4756 scope.go:117] "RemoveContainer" containerID="21bccc702df2c98b4b2375138a4db6a15211f89153dd8eba4710d3d800d3f556" Nov 24 12:55:18 crc kubenswrapper[4756]: I1124 12:55:18.213836 4756 scope.go:117] "RemoveContainer" containerID="881bcbf2234f7f14037c37522ccfd4e162d69ae2578f095d3d56d6560f1b05e0" Nov 24 12:55:20 crc kubenswrapper[4756]: I1124 12:55:20.906635 4756 generic.go:334] "Generic (PLEG): container finished" 
podID="26be1a13-f657-4240-ba64-a260d9a6355a" containerID="697de36d702f8461d48290c0ad16b0f44eae9fffad7b7442c63e5adf1bc819b8" exitCode=0 Nov 24 12:55:20 crc kubenswrapper[4756]: I1124 12:55:20.906762 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" event={"ID":"26be1a13-f657-4240-ba64-a260d9a6355a","Type":"ContainerDied","Data":"697de36d702f8461d48290c0ad16b0f44eae9fffad7b7442c63e5adf1bc819b8"} Nov 24 12:55:22 crc kubenswrapper[4756]: I1124 12:55:22.376989 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" Nov 24 12:55:22 crc kubenswrapper[4756]: I1124 12:55:22.437470 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26be1a13-f657-4240-ba64-a260d9a6355a-inventory\") pod \"26be1a13-f657-4240-ba64-a260d9a6355a\" (UID: \"26be1a13-f657-4240-ba64-a260d9a6355a\") " Nov 24 12:55:22 crc kubenswrapper[4756]: I1124 12:55:22.438012 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfn9j\" (UniqueName: \"kubernetes.io/projected/26be1a13-f657-4240-ba64-a260d9a6355a-kube-api-access-pfn9j\") pod \"26be1a13-f657-4240-ba64-a260d9a6355a\" (UID: \"26be1a13-f657-4240-ba64-a260d9a6355a\") " Nov 24 12:55:22 crc kubenswrapper[4756]: I1124 12:55:22.438062 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26be1a13-f657-4240-ba64-a260d9a6355a-ssh-key\") pod \"26be1a13-f657-4240-ba64-a260d9a6355a\" (UID: \"26be1a13-f657-4240-ba64-a260d9a6355a\") " Nov 24 12:55:22 crc kubenswrapper[4756]: I1124 12:55:22.445120 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26be1a13-f657-4240-ba64-a260d9a6355a-kube-api-access-pfn9j" (OuterVolumeSpecName: 
"kube-api-access-pfn9j") pod "26be1a13-f657-4240-ba64-a260d9a6355a" (UID: "26be1a13-f657-4240-ba64-a260d9a6355a"). InnerVolumeSpecName "kube-api-access-pfn9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:55:22 crc kubenswrapper[4756]: I1124 12:55:22.469697 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26be1a13-f657-4240-ba64-a260d9a6355a-inventory" (OuterVolumeSpecName: "inventory") pod "26be1a13-f657-4240-ba64-a260d9a6355a" (UID: "26be1a13-f657-4240-ba64-a260d9a6355a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:55:22 crc kubenswrapper[4756]: I1124 12:55:22.470145 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26be1a13-f657-4240-ba64-a260d9a6355a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "26be1a13-f657-4240-ba64-a260d9a6355a" (UID: "26be1a13-f657-4240-ba64-a260d9a6355a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:55:22 crc kubenswrapper[4756]: I1124 12:55:22.541292 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfn9j\" (UniqueName: \"kubernetes.io/projected/26be1a13-f657-4240-ba64-a260d9a6355a-kube-api-access-pfn9j\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:22 crc kubenswrapper[4756]: I1124 12:55:22.541350 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26be1a13-f657-4240-ba64-a260d9a6355a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:22 crc kubenswrapper[4756]: I1124 12:55:22.541366 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26be1a13-f657-4240-ba64-a260d9a6355a-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:55:22 crc kubenswrapper[4756]: E1124 12:55:22.745104 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26be1a13_f657_4240_ba64_a260d9a6355a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26be1a13_f657_4240_ba64_a260d9a6355a.slice/crio-3f0248763ef270042d3759e7d40318874511787baaafc68ae1987c461dcda44e\": RecentStats: unable to find data in memory cache]" Nov 24 12:55:22 crc kubenswrapper[4756]: I1124 12:55:22.931000 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" event={"ID":"26be1a13-f657-4240-ba64-a260d9a6355a","Type":"ContainerDied","Data":"3f0248763ef270042d3759e7d40318874511787baaafc68ae1987c461dcda44e"} Nov 24 12:55:22 crc kubenswrapper[4756]: I1124 12:55:22.931385 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f0248763ef270042d3759e7d40318874511787baaafc68ae1987c461dcda44e" Nov 24 12:55:22 crc kubenswrapper[4756]: I1124 12:55:22.931075 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.034397 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg"] Nov 24 12:55:23 crc kubenswrapper[4756]: E1124 12:55:23.034890 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32ab267-a3aa-4fa5-80e5-ebbe78465af3" containerName="extract-utilities" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.034910 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32ab267-a3aa-4fa5-80e5-ebbe78465af3" containerName="extract-utilities" Nov 24 12:55:23 crc kubenswrapper[4756]: E1124 12:55:23.034927 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" containerName="extract-content" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.034935 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" containerName="extract-content" Nov 24 12:55:23 crc kubenswrapper[4756]: E1124 12:55:23.034954 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb6d572-e690-4143-aeff-982f9371c75d" containerName="extract-content" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.034961 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb6d572-e690-4143-aeff-982f9371c75d" containerName="extract-content" Nov 24 12:55:23 crc kubenswrapper[4756]: E1124 12:55:23.034973 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" containerName="registry-server" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.034980 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" containerName="registry-server" Nov 24 12:55:23 crc kubenswrapper[4756]: E1124 12:55:23.034992 4756 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="26be1a13-f657-4240-ba64-a260d9a6355a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.035001 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="26be1a13-f657-4240-ba64-a260d9a6355a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 12:55:23 crc kubenswrapper[4756]: E1124 12:55:23.035038 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" containerName="extract-utilities" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.035046 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" containerName="extract-utilities" Nov 24 12:55:23 crc kubenswrapper[4756]: E1124 12:55:23.035055 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32ab267-a3aa-4fa5-80e5-ebbe78465af3" containerName="extract-content" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.035062 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32ab267-a3aa-4fa5-80e5-ebbe78465af3" containerName="extract-content" Nov 24 12:55:23 crc kubenswrapper[4756]: E1124 12:55:23.035073 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb6d572-e690-4143-aeff-982f9371c75d" containerName="registry-server" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.035080 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb6d572-e690-4143-aeff-982f9371c75d" containerName="registry-server" Nov 24 12:55:23 crc kubenswrapper[4756]: E1124 12:55:23.035099 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32ab267-a3aa-4fa5-80e5-ebbe78465af3" containerName="registry-server" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.035106 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32ab267-a3aa-4fa5-80e5-ebbe78465af3" containerName="registry-server" Nov 24 12:55:23 crc kubenswrapper[4756]: E1124 
12:55:23.035119 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb6d572-e690-4143-aeff-982f9371c75d" containerName="extract-utilities" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.035126 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb6d572-e690-4143-aeff-982f9371c75d" containerName="extract-utilities" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.035369 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6eafc7-5876-4e19-b9e6-b945663c3e68" containerName="registry-server" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.035393 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a32ab267-a3aa-4fa5-80e5-ebbe78465af3" containerName="registry-server" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.035405 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="26be1a13-f657-4240-ba64-a260d9a6355a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.035422 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="efb6d572-e690-4143-aeff-982f9371c75d" containerName="registry-server" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.036257 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.038899 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.039059 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.039297 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.039547 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.045802 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg"] Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.158675 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbnxd\" (UniqueName: \"kubernetes.io/projected/529df660-5b77-4ba7-b190-02acb8a8de9c-kube-api-access-pbnxd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg\" (UID: \"529df660-5b77-4ba7-b190-02acb8a8de9c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.159023 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/529df660-5b77-4ba7-b190-02acb8a8de9c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg\" (UID: \"529df660-5b77-4ba7-b190-02acb8a8de9c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" Nov 24 12:55:23 crc kubenswrapper[4756]: 
I1124 12:55:23.159489 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/529df660-5b77-4ba7-b190-02acb8a8de9c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg\" (UID: \"529df660-5b77-4ba7-b190-02acb8a8de9c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.262003 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbnxd\" (UniqueName: \"kubernetes.io/projected/529df660-5b77-4ba7-b190-02acb8a8de9c-kube-api-access-pbnxd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg\" (UID: \"529df660-5b77-4ba7-b190-02acb8a8de9c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.262147 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/529df660-5b77-4ba7-b190-02acb8a8de9c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg\" (UID: \"529df660-5b77-4ba7-b190-02acb8a8de9c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.262204 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/529df660-5b77-4ba7-b190-02acb8a8de9c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg\" (UID: \"529df660-5b77-4ba7-b190-02acb8a8de9c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.266642 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/529df660-5b77-4ba7-b190-02acb8a8de9c-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg\" (UID: \"529df660-5b77-4ba7-b190-02acb8a8de9c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.267637 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/529df660-5b77-4ba7-b190-02acb8a8de9c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg\" (UID: \"529df660-5b77-4ba7-b190-02acb8a8de9c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.287965 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbnxd\" (UniqueName: \"kubernetes.io/projected/529df660-5b77-4ba7-b190-02acb8a8de9c-kube-api-access-pbnxd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg\" (UID: \"529df660-5b77-4ba7-b190-02acb8a8de9c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.361594 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.926475 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg"] Nov 24 12:55:23 crc kubenswrapper[4756]: I1124 12:55:23.944918 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" event={"ID":"529df660-5b77-4ba7-b190-02acb8a8de9c","Type":"ContainerStarted","Data":"db2bfd889d39396e403a340611105945290473b95f43d51f2af11d87ae17c299"} Nov 24 12:55:24 crc kubenswrapper[4756]: I1124 12:55:24.069830 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4kphn"] Nov 24 12:55:24 crc kubenswrapper[4756]: I1124 12:55:24.082059 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-9bg6g"] Nov 24 12:55:24 crc kubenswrapper[4756]: I1124 12:55:24.093872 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-9bg6g"] Nov 24 12:55:24 crc kubenswrapper[4756]: I1124 12:55:24.103886 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4kphn"] Nov 24 12:55:24 crc kubenswrapper[4756]: I1124 12:55:24.476293 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:55:24 crc kubenswrapper[4756]: E1124 12:55:24.477011 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:55:24 crc kubenswrapper[4756]: I1124 
12:55:24.490696 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3efa48e0-4cac-4152-8920-74324d606778" path="/var/lib/kubelet/pods/3efa48e0-4cac-4152-8920-74324d606778/volumes" Nov 24 12:55:24 crc kubenswrapper[4756]: I1124 12:55:24.492105 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee404871-3e83-4fa1-a773-df0c95222c32" path="/var/lib/kubelet/pods/ee404871-3e83-4fa1-a773-df0c95222c32/volumes" Nov 24 12:55:24 crc kubenswrapper[4756]: I1124 12:55:24.964573 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" event={"ID":"529df660-5b77-4ba7-b190-02acb8a8de9c","Type":"ContainerStarted","Data":"208fb511a6f0d61ff365b1e07fdfbe79d4c6e0aacf3ad376ce231b5658f9e3cb"} Nov 24 12:55:24 crc kubenswrapper[4756]: I1124 12:55:24.989311 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" podStartSLOduration=1.5428795119999998 podStartE2EDuration="1.989286312s" podCreationTimestamp="2025-11-24 12:55:23 +0000 UTC" firstStartedPulling="2025-11-24 12:55:23.930949443 +0000 UTC m=+1656.288463585" lastFinishedPulling="2025-11-24 12:55:24.377356243 +0000 UTC m=+1656.734870385" observedRunningTime="2025-11-24 12:55:24.980110714 +0000 UTC m=+1657.337624856" watchObservedRunningTime="2025-11-24 12:55:24.989286312 +0000 UTC m=+1657.346800454" Nov 24 12:55:36 crc kubenswrapper[4756]: I1124 12:55:36.032658 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wz84p"] Nov 24 12:55:36 crc kubenswrapper[4756]: I1124 12:55:36.042814 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wz84p"] Nov 24 12:55:36 crc kubenswrapper[4756]: I1124 12:55:36.495867 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f485ab9-01fd-4640-833e-8ee586798f2e" 
path="/var/lib/kubelet/pods/8f485ab9-01fd-4640-833e-8ee586798f2e/volumes" Nov 24 12:55:37 crc kubenswrapper[4756]: I1124 12:55:37.034995 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-vbnrc"] Nov 24 12:55:37 crc kubenswrapper[4756]: I1124 12:55:37.044881 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-vbnrc"] Nov 24 12:55:38 crc kubenswrapper[4756]: I1124 12:55:38.485077 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:55:38 crc kubenswrapper[4756]: E1124 12:55:38.485418 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:55:38 crc kubenswrapper[4756]: I1124 12:55:38.491625 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5e34263-c415-4300-a110-ab2ad6787566" path="/var/lib/kubelet/pods/e5e34263-c415-4300-a110-ab2ad6787566/volumes" Nov 24 12:55:52 crc kubenswrapper[4756]: I1124 12:55:52.476596 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:55:52 crc kubenswrapper[4756]: E1124 12:55:52.477886 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:56:07 crc 
kubenswrapper[4756]: I1124 12:56:07.475322 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:56:07 crc kubenswrapper[4756]: E1124 12:56:07.476024 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:56:18 crc kubenswrapper[4756]: I1124 12:56:18.367295 4756 scope.go:117] "RemoveContainer" containerID="635fe80f24e6a7a1873834ce53a8e72cd5fc4fc981da9a13a467ad87cd828a3a" Nov 24 12:56:18 crc kubenswrapper[4756]: I1124 12:56:18.412629 4756 scope.go:117] "RemoveContainer" containerID="45f7b658dcfd3d76440a1bb287296bee264cefd411037fb9f7c5fa6f44b7b6b3" Nov 24 12:56:18 crc kubenswrapper[4756]: I1124 12:56:18.482049 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:56:18 crc kubenswrapper[4756]: E1124 12:56:18.482350 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:56:18 crc kubenswrapper[4756]: I1124 12:56:18.500431 4756 scope.go:117] "RemoveContainer" containerID="2530f55973ddededcba23f59504f547a30369490a802a58d58ad6857f48ea262" Nov 24 12:56:18 crc kubenswrapper[4756]: I1124 12:56:18.535139 4756 scope.go:117] "RemoveContainer" 
containerID="0e8549083469d6c6c9629a847ea97c5f2f5c3c8a75454bc7a7442d5004766ab6" Nov 24 12:56:22 crc kubenswrapper[4756]: I1124 12:56:22.042690 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2119-account-create-2ctwr"] Nov 24 12:56:22 crc kubenswrapper[4756]: I1124 12:56:22.052030 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2119-account-create-2ctwr"] Nov 24 12:56:22 crc kubenswrapper[4756]: I1124 12:56:22.500369 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59840fc2-c2e9-4258-8857-7c48709f1436" path="/var/lib/kubelet/pods/59840fc2-c2e9-4258-8857-7c48709f1436/volumes" Nov 24 12:56:23 crc kubenswrapper[4756]: I1124 12:56:23.035201 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-fd9c-account-create-zxl5z"] Nov 24 12:56:23 crc kubenswrapper[4756]: I1124 12:56:23.041957 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-45l44"] Nov 24 12:56:23 crc kubenswrapper[4756]: I1124 12:56:23.048541 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ba07-account-create-gdv5z"] Nov 24 12:56:23 crc kubenswrapper[4756]: I1124 12:56:23.056371 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qn6sw"] Nov 24 12:56:23 crc kubenswrapper[4756]: I1124 12:56:23.065808 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mbg9c"] Nov 24 12:56:23 crc kubenswrapper[4756]: I1124 12:56:23.073252 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-fd9c-account-create-zxl5z"] Nov 24 12:56:23 crc kubenswrapper[4756]: I1124 12:56:23.081481 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-45l44"] Nov 24 12:56:23 crc kubenswrapper[4756]: I1124 12:56:23.088876 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qn6sw"] Nov 24 
12:56:23 crc kubenswrapper[4756]: I1124 12:56:23.095447 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ba07-account-create-gdv5z"] Nov 24 12:56:23 crc kubenswrapper[4756]: I1124 12:56:23.103511 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mbg9c"] Nov 24 12:56:24 crc kubenswrapper[4756]: I1124 12:56:24.488760 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346aceff-936e-4266-aede-d48f252896f0" path="/var/lib/kubelet/pods/346aceff-936e-4266-aede-d48f252896f0/volumes" Nov 24 12:56:24 crc kubenswrapper[4756]: I1124 12:56:24.490014 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40bd0562-377e-44fe-9e32-5b8256063b23" path="/var/lib/kubelet/pods/40bd0562-377e-44fe-9e32-5b8256063b23/volumes" Nov 24 12:56:24 crc kubenswrapper[4756]: I1124 12:56:24.490845 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97cfae0a-2fbc-495b-8084-b0b1701e541b" path="/var/lib/kubelet/pods/97cfae0a-2fbc-495b-8084-b0b1701e541b/volumes" Nov 24 12:56:24 crc kubenswrapper[4756]: I1124 12:56:24.491887 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5090b91-9f36-4fb1-95d0-aa6a48ae2bed" path="/var/lib/kubelet/pods/f5090b91-9f36-4fb1-95d0-aa6a48ae2bed/volumes" Nov 24 12:56:24 crc kubenswrapper[4756]: I1124 12:56:24.493428 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb2a0ae3-8559-497c-becc-d2b6dc77065c" path="/var/lib/kubelet/pods/fb2a0ae3-8559-497c-becc-d2b6dc77065c/volumes" Nov 24 12:56:31 crc kubenswrapper[4756]: I1124 12:56:31.475852 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:56:31 crc kubenswrapper[4756]: E1124 12:56:31.476678 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:56:37 crc kubenswrapper[4756]: I1124 12:56:37.677245 4756 generic.go:334] "Generic (PLEG): container finished" podID="529df660-5b77-4ba7-b190-02acb8a8de9c" containerID="208fb511a6f0d61ff365b1e07fdfbe79d4c6e0aacf3ad376ce231b5658f9e3cb" exitCode=0 Nov 24 12:56:37 crc kubenswrapper[4756]: I1124 12:56:37.677317 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" event={"ID":"529df660-5b77-4ba7-b190-02acb8a8de9c","Type":"ContainerDied","Data":"208fb511a6f0d61ff365b1e07fdfbe79d4c6e0aacf3ad376ce231b5658f9e3cb"} Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.107456 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.268577 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/529df660-5b77-4ba7-b190-02acb8a8de9c-ssh-key\") pod \"529df660-5b77-4ba7-b190-02acb8a8de9c\" (UID: \"529df660-5b77-4ba7-b190-02acb8a8de9c\") " Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.268780 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbnxd\" (UniqueName: \"kubernetes.io/projected/529df660-5b77-4ba7-b190-02acb8a8de9c-kube-api-access-pbnxd\") pod \"529df660-5b77-4ba7-b190-02acb8a8de9c\" (UID: \"529df660-5b77-4ba7-b190-02acb8a8de9c\") " Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.268855 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/529df660-5b77-4ba7-b190-02acb8a8de9c-inventory\") pod \"529df660-5b77-4ba7-b190-02acb8a8de9c\" (UID: \"529df660-5b77-4ba7-b190-02acb8a8de9c\") " Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.274633 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529df660-5b77-4ba7-b190-02acb8a8de9c-kube-api-access-pbnxd" (OuterVolumeSpecName: "kube-api-access-pbnxd") pod "529df660-5b77-4ba7-b190-02acb8a8de9c" (UID: "529df660-5b77-4ba7-b190-02acb8a8de9c"). InnerVolumeSpecName "kube-api-access-pbnxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.299483 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529df660-5b77-4ba7-b190-02acb8a8de9c-inventory" (OuterVolumeSpecName: "inventory") pod "529df660-5b77-4ba7-b190-02acb8a8de9c" (UID: "529df660-5b77-4ba7-b190-02acb8a8de9c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.301754 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529df660-5b77-4ba7-b190-02acb8a8de9c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "529df660-5b77-4ba7-b190-02acb8a8de9c" (UID: "529df660-5b77-4ba7-b190-02acb8a8de9c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.374588 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbnxd\" (UniqueName: \"kubernetes.io/projected/529df660-5b77-4ba7-b190-02acb8a8de9c-kube-api-access-pbnxd\") on node \"crc\" DevicePath \"\"" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.374643 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/529df660-5b77-4ba7-b190-02acb8a8de9c-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.374662 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/529df660-5b77-4ba7-b190-02acb8a8de9c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.699032 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" event={"ID":"529df660-5b77-4ba7-b190-02acb8a8de9c","Type":"ContainerDied","Data":"db2bfd889d39396e403a340611105945290473b95f43d51f2af11d87ae17c299"} Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.699124 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db2bfd889d39396e403a340611105945290473b95f43d51f2af11d87ae17c299" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.699200 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.778180 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4"] Nov 24 12:56:39 crc kubenswrapper[4756]: E1124 12:56:39.779187 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529df660-5b77-4ba7-b190-02acb8a8de9c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.779210 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="529df660-5b77-4ba7-b190-02acb8a8de9c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.779512 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="529df660-5b77-4ba7-b190-02acb8a8de9c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.780415 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.782321 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.782362 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.783281 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.783328 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.788084 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4"] Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.883010 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq2xr\" (UniqueName: \"kubernetes.io/projected/2112cb18-ecf9-43ff-b22c-37044d2b64e2-kube-api-access-bq2xr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4\" (UID: \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.883101 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2112cb18-ecf9-43ff-b22c-37044d2b64e2-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4\" (UID: \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 
12:56:39.883136 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2112cb18-ecf9-43ff-b22c-37044d2b64e2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4\" (UID: \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.985321 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq2xr\" (UniqueName: \"kubernetes.io/projected/2112cb18-ecf9-43ff-b22c-37044d2b64e2-kube-api-access-bq2xr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4\" (UID: \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.985386 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2112cb18-ecf9-43ff-b22c-37044d2b64e2-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4\" (UID: \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.985408 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2112cb18-ecf9-43ff-b22c-37044d2b64e2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4\" (UID: \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.990384 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2112cb18-ecf9-43ff-b22c-37044d2b64e2-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4\" (UID: \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" Nov 24 12:56:39 crc kubenswrapper[4756]: I1124 12:56:39.990746 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2112cb18-ecf9-43ff-b22c-37044d2b64e2-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4\" (UID: \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" Nov 24 12:56:40 crc kubenswrapper[4756]: I1124 12:56:40.005900 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq2xr\" (UniqueName: \"kubernetes.io/projected/2112cb18-ecf9-43ff-b22c-37044d2b64e2-kube-api-access-bq2xr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4\" (UID: \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" Nov 24 12:56:40 crc kubenswrapper[4756]: I1124 12:56:40.098731 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" Nov 24 12:56:40 crc kubenswrapper[4756]: I1124 12:56:40.608078 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4"] Nov 24 12:56:40 crc kubenswrapper[4756]: I1124 12:56:40.709531 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" event={"ID":"2112cb18-ecf9-43ff-b22c-37044d2b64e2","Type":"ContainerStarted","Data":"b861b7d7c36e1a2a9f68d2fdf8202c9e99c128de36e96594bb4a20b7a720ad9d"} Nov 24 12:56:41 crc kubenswrapper[4756]: I1124 12:56:41.750781 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" event={"ID":"2112cb18-ecf9-43ff-b22c-37044d2b64e2","Type":"ContainerStarted","Data":"73fad503dbbb64e8deda54a1de50b747d16a55fbe2f4552fc8ac85f82fea6ad5"} Nov 24 12:56:41 crc kubenswrapper[4756]: I1124 12:56:41.777542 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" podStartSLOduration=2.318767943 podStartE2EDuration="2.777521688s" podCreationTimestamp="2025-11-24 12:56:39 +0000 UTC" firstStartedPulling="2025-11-24 12:56:40.615579904 +0000 UTC m=+1732.973094046" lastFinishedPulling="2025-11-24 12:56:41.074333639 +0000 UTC m=+1733.431847791" observedRunningTime="2025-11-24 12:56:41.773352325 +0000 UTC m=+1734.130866467" watchObservedRunningTime="2025-11-24 12:56:41.777521688 +0000 UTC m=+1734.135035830" Nov 24 12:56:42 crc kubenswrapper[4756]: I1124 12:56:42.477366 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:56:42 crc kubenswrapper[4756]: E1124 12:56:42.478124 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:56:46 crc kubenswrapper[4756]: I1124 12:56:46.796177 4756 generic.go:334] "Generic (PLEG): container finished" podID="2112cb18-ecf9-43ff-b22c-37044d2b64e2" containerID="73fad503dbbb64e8deda54a1de50b747d16a55fbe2f4552fc8ac85f82fea6ad5" exitCode=0 Nov 24 12:56:46 crc kubenswrapper[4756]: I1124 12:56:46.796221 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" event={"ID":"2112cb18-ecf9-43ff-b22c-37044d2b64e2","Type":"ContainerDied","Data":"73fad503dbbb64e8deda54a1de50b747d16a55fbe2f4552fc8ac85f82fea6ad5"} Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.270218 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.361878 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2112cb18-ecf9-43ff-b22c-37044d2b64e2-ssh-key\") pod \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\" (UID: \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\") " Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.361956 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2112cb18-ecf9-43ff-b22c-37044d2b64e2-inventory\") pod \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\" (UID: \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\") " Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.362120 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq2xr\" (UniqueName: \"kubernetes.io/projected/2112cb18-ecf9-43ff-b22c-37044d2b64e2-kube-api-access-bq2xr\") pod \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\" (UID: \"2112cb18-ecf9-43ff-b22c-37044d2b64e2\") " Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.371540 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2112cb18-ecf9-43ff-b22c-37044d2b64e2-kube-api-access-bq2xr" (OuterVolumeSpecName: "kube-api-access-bq2xr") pod "2112cb18-ecf9-43ff-b22c-37044d2b64e2" (UID: "2112cb18-ecf9-43ff-b22c-37044d2b64e2"). InnerVolumeSpecName "kube-api-access-bq2xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.407536 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2112cb18-ecf9-43ff-b22c-37044d2b64e2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2112cb18-ecf9-43ff-b22c-37044d2b64e2" (UID: "2112cb18-ecf9-43ff-b22c-37044d2b64e2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.417208 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2112cb18-ecf9-43ff-b22c-37044d2b64e2-inventory" (OuterVolumeSpecName: "inventory") pod "2112cb18-ecf9-43ff-b22c-37044d2b64e2" (UID: "2112cb18-ecf9-43ff-b22c-37044d2b64e2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.465677 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2112cb18-ecf9-43ff-b22c-37044d2b64e2-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.465735 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2112cb18-ecf9-43ff-b22c-37044d2b64e2-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.465748 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq2xr\" (UniqueName: \"kubernetes.io/projected/2112cb18-ecf9-43ff-b22c-37044d2b64e2-kube-api-access-bq2xr\") on node \"crc\" DevicePath \"\"" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.869738 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" event={"ID":"2112cb18-ecf9-43ff-b22c-37044d2b64e2","Type":"ContainerDied","Data":"b861b7d7c36e1a2a9f68d2fdf8202c9e99c128de36e96594bb4a20b7a720ad9d"} Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.869800 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.869802 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b861b7d7c36e1a2a9f68d2fdf8202c9e99c128de36e96594bb4a20b7a720ad9d" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.926459 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l"] Nov 24 12:56:48 crc kubenswrapper[4756]: E1124 12:56:48.926865 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2112cb18-ecf9-43ff-b22c-37044d2b64e2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.926885 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2112cb18-ecf9-43ff-b22c-37044d2b64e2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.927096 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2112cb18-ecf9-43ff-b22c-37044d2b64e2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.927784 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.930809 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.930888 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.935500 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.935746 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.951102 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l"] Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.975350 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n659f\" (UniqueName: \"kubernetes.io/projected/6d74c7df-a689-4879-9319-a808a2f726cb-kube-api-access-n659f\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z5l4l\" (UID: \"6d74c7df-a689-4879-9319-a808a2f726cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.975789 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d74c7df-a689-4879-9319-a808a2f726cb-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z5l4l\" (UID: \"6d74c7df-a689-4879-9319-a808a2f726cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" Nov 24 12:56:48 crc kubenswrapper[4756]: I1124 12:56:48.975858 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d74c7df-a689-4879-9319-a808a2f726cb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z5l4l\" (UID: \"6d74c7df-a689-4879-9319-a808a2f726cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" Nov 24 12:56:49 crc kubenswrapper[4756]: I1124 12:56:49.076908 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d74c7df-a689-4879-9319-a808a2f726cb-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z5l4l\" (UID: \"6d74c7df-a689-4879-9319-a808a2f726cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" Nov 24 12:56:49 crc kubenswrapper[4756]: I1124 12:56:49.076978 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d74c7df-a689-4879-9319-a808a2f726cb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z5l4l\" (UID: \"6d74c7df-a689-4879-9319-a808a2f726cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" Nov 24 12:56:49 crc kubenswrapper[4756]: I1124 12:56:49.077087 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n659f\" (UniqueName: \"kubernetes.io/projected/6d74c7df-a689-4879-9319-a808a2f726cb-kube-api-access-n659f\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z5l4l\" (UID: \"6d74c7df-a689-4879-9319-a808a2f726cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" Nov 24 12:56:49 crc kubenswrapper[4756]: I1124 12:56:49.082884 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d74c7df-a689-4879-9319-a808a2f726cb-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z5l4l\" (UID: 
\"6d74c7df-a689-4879-9319-a808a2f726cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" Nov 24 12:56:49 crc kubenswrapper[4756]: I1124 12:56:49.084707 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d74c7df-a689-4879-9319-a808a2f726cb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z5l4l\" (UID: \"6d74c7df-a689-4879-9319-a808a2f726cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" Nov 24 12:56:49 crc kubenswrapper[4756]: I1124 12:56:49.104212 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n659f\" (UniqueName: \"kubernetes.io/projected/6d74c7df-a689-4879-9319-a808a2f726cb-kube-api-access-n659f\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-z5l4l\" (UID: \"6d74c7df-a689-4879-9319-a808a2f726cb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" Nov 24 12:56:49 crc kubenswrapper[4756]: I1124 12:56:49.247650 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" Nov 24 12:56:49 crc kubenswrapper[4756]: I1124 12:56:49.814744 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l"] Nov 24 12:56:49 crc kubenswrapper[4756]: W1124 12:56:49.819557 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d74c7df_a689_4879_9319_a808a2f726cb.slice/crio-26cef1cdd429ec75abf5829f12233ca893db4c91e76682092c2ada027b7b6938 WatchSource:0}: Error finding container 26cef1cdd429ec75abf5829f12233ca893db4c91e76682092c2ada027b7b6938: Status 404 returned error can't find the container with id 26cef1cdd429ec75abf5829f12233ca893db4c91e76682092c2ada027b7b6938 Nov 24 12:56:49 crc kubenswrapper[4756]: I1124 12:56:49.881280 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" event={"ID":"6d74c7df-a689-4879-9319-a808a2f726cb","Type":"ContainerStarted","Data":"26cef1cdd429ec75abf5829f12233ca893db4c91e76682092c2ada027b7b6938"} Nov 24 12:56:50 crc kubenswrapper[4756]: I1124 12:56:50.890861 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" event={"ID":"6d74c7df-a689-4879-9319-a808a2f726cb","Type":"ContainerStarted","Data":"4770a7958d37afe9e14e6f6a0f99950b410f49310d532067d59a0f1d6a69eb06"} Nov 24 12:56:50 crc kubenswrapper[4756]: I1124 12:56:50.906403 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" podStartSLOduration=2.290228305 podStartE2EDuration="2.906384509s" podCreationTimestamp="2025-11-24 12:56:48 +0000 UTC" firstStartedPulling="2025-11-24 12:56:49.821865371 +0000 UTC m=+1742.179379513" lastFinishedPulling="2025-11-24 12:56:50.438021575 +0000 UTC m=+1742.795535717" 
observedRunningTime="2025-11-24 12:56:50.906262186 +0000 UTC m=+1743.263776358" watchObservedRunningTime="2025-11-24 12:56:50.906384509 +0000 UTC m=+1743.263898651" Nov 24 12:56:53 crc kubenswrapper[4756]: I1124 12:56:53.476493 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:56:53 crc kubenswrapper[4756]: E1124 12:56:53.477318 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:56:56 crc kubenswrapper[4756]: I1124 12:56:56.054230 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mxm8r"] Nov 24 12:56:56 crc kubenswrapper[4756]: I1124 12:56:56.064405 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mxm8r"] Nov 24 12:56:56 crc kubenswrapper[4756]: I1124 12:56:56.489742 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a77ddfde-c76b-4fea-b0a6-fcd470aa87a8" path="/var/lib/kubelet/pods/a77ddfde-c76b-4fea-b0a6-fcd470aa87a8/volumes" Nov 24 12:57:05 crc kubenswrapper[4756]: I1124 12:57:05.476628 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:57:05 crc kubenswrapper[4756]: E1124 12:57:05.477713 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:57:18 crc kubenswrapper[4756]: I1124 12:57:18.695116 4756 scope.go:117] "RemoveContainer" containerID="471257f0bae6820d9871d58a02713c0a0a133c2ea5884e630e526b2109354f36" Nov 24 12:57:18 crc kubenswrapper[4756]: I1124 12:57:18.734817 4756 scope.go:117] "RemoveContainer" containerID="ab9299a31a21747d7b25a7f96259d026a67dd463f6d0ce87a644694df9c6d32e" Nov 24 12:57:18 crc kubenswrapper[4756]: I1124 12:57:18.767998 4756 scope.go:117] "RemoveContainer" containerID="686728c23356b5906da3533de4d5d4d335cdeb3de3f10c32716073715c24af4f" Nov 24 12:57:18 crc kubenswrapper[4756]: I1124 12:57:18.843414 4756 scope.go:117] "RemoveContainer" containerID="83148707b1b4f63aaaaa8abdaf29d945c528d087c54856ddef4a70c7d6de879b" Nov 24 12:57:18 crc kubenswrapper[4756]: I1124 12:57:18.871717 4756 scope.go:117] "RemoveContainer" containerID="f9cdafc88327978245679d7b1fb94759b02f09e765a19779a0c43261a7192ff8" Nov 24 12:57:18 crc kubenswrapper[4756]: I1124 12:57:18.933557 4756 scope.go:117] "RemoveContainer" containerID="30a76c4adc68a52b75c47cb2f2be3ecddbb7554883535a49cbb7ea5746deecb6" Nov 24 12:57:18 crc kubenswrapper[4756]: I1124 12:57:18.959647 4756 scope.go:117] "RemoveContainer" containerID="3068fdd3e96ede4aa0d3f49496431361e47567a25276ad28c333b6a0f4daf407" Nov 24 12:57:20 crc kubenswrapper[4756]: I1124 12:57:20.047010 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4vkkv"] Nov 24 12:57:20 crc kubenswrapper[4756]: I1124 12:57:20.055056 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4vkkv"] Nov 24 12:57:20 crc kubenswrapper[4756]: I1124 12:57:20.475647 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:57:20 crc kubenswrapper[4756]: E1124 12:57:20.475921 4756 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:57:20 crc kubenswrapper[4756]: I1124 12:57:20.489592 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e04cac76-134f-4232-aaad-c16ec2ef43dc" path="/var/lib/kubelet/pods/e04cac76-134f-4232-aaad-c16ec2ef43dc/volumes" Nov 24 12:57:21 crc kubenswrapper[4756]: I1124 12:57:21.035508 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-877px"] Nov 24 12:57:21 crc kubenswrapper[4756]: I1124 12:57:21.043311 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-877px"] Nov 24 12:57:22 crc kubenswrapper[4756]: I1124 12:57:22.490614 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f3e378-99a3-4ef7-b4a2-15efaa919862" path="/var/lib/kubelet/pods/07f3e378-99a3-4ef7-b4a2-15efaa919862/volumes" Nov 24 12:57:31 crc kubenswrapper[4756]: I1124 12:57:31.475669 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:57:31 crc kubenswrapper[4756]: E1124 12:57:31.476476 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:57:32 crc kubenswrapper[4756]: I1124 12:57:32.347653 4756 generic.go:334] "Generic (PLEG): container 
finished" podID="6d74c7df-a689-4879-9319-a808a2f726cb" containerID="4770a7958d37afe9e14e6f6a0f99950b410f49310d532067d59a0f1d6a69eb06" exitCode=0 Nov 24 12:57:32 crc kubenswrapper[4756]: I1124 12:57:32.347721 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" event={"ID":"6d74c7df-a689-4879-9319-a808a2f726cb","Type":"ContainerDied","Data":"4770a7958d37afe9e14e6f6a0f99950b410f49310d532067d59a0f1d6a69eb06"} Nov 24 12:57:33 crc kubenswrapper[4756]: I1124 12:57:33.895508 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.040193 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d74c7df-a689-4879-9319-a808a2f726cb-ssh-key\") pod \"6d74c7df-a689-4879-9319-a808a2f726cb\" (UID: \"6d74c7df-a689-4879-9319-a808a2f726cb\") " Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.040245 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d74c7df-a689-4879-9319-a808a2f726cb-inventory\") pod \"6d74c7df-a689-4879-9319-a808a2f726cb\" (UID: \"6d74c7df-a689-4879-9319-a808a2f726cb\") " Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.040443 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n659f\" (UniqueName: \"kubernetes.io/projected/6d74c7df-a689-4879-9319-a808a2f726cb-kube-api-access-n659f\") pod \"6d74c7df-a689-4879-9319-a808a2f726cb\" (UID: \"6d74c7df-a689-4879-9319-a808a2f726cb\") " Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.047104 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d74c7df-a689-4879-9319-a808a2f726cb-kube-api-access-n659f" (OuterVolumeSpecName: 
"kube-api-access-n659f") pod "6d74c7df-a689-4879-9319-a808a2f726cb" (UID: "6d74c7df-a689-4879-9319-a808a2f726cb"). InnerVolumeSpecName "kube-api-access-n659f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.072483 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d74c7df-a689-4879-9319-a808a2f726cb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6d74c7df-a689-4879-9319-a808a2f726cb" (UID: "6d74c7df-a689-4879-9319-a808a2f726cb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.074181 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d74c7df-a689-4879-9319-a808a2f726cb-inventory" (OuterVolumeSpecName: "inventory") pod "6d74c7df-a689-4879-9319-a808a2f726cb" (UID: "6d74c7df-a689-4879-9319-a808a2f726cb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.143435 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d74c7df-a689-4879-9319-a808a2f726cb-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.143474 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d74c7df-a689-4879-9319-a808a2f726cb-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.143490 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n659f\" (UniqueName: \"kubernetes.io/projected/6d74c7df-a689-4879-9319-a808a2f726cb-kube-api-access-n659f\") on node \"crc\" DevicePath \"\"" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.369863 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" event={"ID":"6d74c7df-a689-4879-9319-a808a2f726cb","Type":"ContainerDied","Data":"26cef1cdd429ec75abf5829f12233ca893db4c91e76682092c2ada027b7b6938"} Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.369902 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26cef1cdd429ec75abf5829f12233ca893db4c91e76682092c2ada027b7b6938" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.369944 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-z5l4l" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.463885 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj"] Nov 24 12:57:34 crc kubenswrapper[4756]: E1124 12:57:34.464551 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d74c7df-a689-4879-9319-a808a2f726cb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.464580 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d74c7df-a689-4879-9319-a808a2f726cb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.464806 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d74c7df-a689-4879-9319-a808a2f726cb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.465677 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.468453 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.469000 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.469307 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.469544 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.472486 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj"] Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.552218 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj\" (UID: \"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.552903 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj\" (UID: \"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.553017 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfqps\" (UniqueName: \"kubernetes.io/projected/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-kube-api-access-cfqps\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj\" (UID: \"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.655700 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj\" (UID: \"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.655853 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfqps\" (UniqueName: \"kubernetes.io/projected/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-kube-api-access-cfqps\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj\" (UID: \"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.656068 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj\" (UID: \"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.664059 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj\" (UID: 
\"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.664273 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj\" (UID: \"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.677098 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfqps\" (UniqueName: \"kubernetes.io/projected/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-kube-api-access-cfqps\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj\" (UID: \"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" Nov 24 12:57:34 crc kubenswrapper[4756]: I1124 12:57:34.796480 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" Nov 24 12:57:35 crc kubenswrapper[4756]: I1124 12:57:35.402144 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj"] Nov 24 12:57:36 crc kubenswrapper[4756]: I1124 12:57:36.401204 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" event={"ID":"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842","Type":"ContainerStarted","Data":"d03316b0aed3a873616ddcb5ea3545af9923681c88a5c0ef9592f7ae503c5f8e"} Nov 24 12:57:36 crc kubenswrapper[4756]: I1124 12:57:36.401834 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" event={"ID":"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842","Type":"ContainerStarted","Data":"d62901375556759af2261d0703c5265cce1b9c0ffb4cb835b0f3571562d38205"} Nov 24 12:57:36 crc kubenswrapper[4756]: I1124 12:57:36.424568 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" podStartSLOduration=1.907402425 podStartE2EDuration="2.4245487s" podCreationTimestamp="2025-11-24 12:57:34 +0000 UTC" firstStartedPulling="2025-11-24 12:57:35.416110368 +0000 UTC m=+1787.773624520" lastFinishedPulling="2025-11-24 12:57:35.933256633 +0000 UTC m=+1788.290770795" observedRunningTime="2025-11-24 12:57:36.418282649 +0000 UTC m=+1788.775796851" watchObservedRunningTime="2025-11-24 12:57:36.4245487 +0000 UTC m=+1788.782062842" Nov 24 12:57:43 crc kubenswrapper[4756]: I1124 12:57:43.475779 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:57:43 crc kubenswrapper[4756]: E1124 12:57:43.476808 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:57:54 crc kubenswrapper[4756]: I1124 12:57:54.476780 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:57:54 crc kubenswrapper[4756]: E1124 12:57:54.477742 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:58:05 crc kubenswrapper[4756]: I1124 12:58:05.058352 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-tw58s"] Nov 24 12:58:05 crc kubenswrapper[4756]: I1124 12:58:05.075299 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-tw58s"] Nov 24 12:58:06 crc kubenswrapper[4756]: I1124 12:58:06.492821 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573ef3b1-3c55-4e67-9df0-d52895183be8" path="/var/lib/kubelet/pods/573ef3b1-3c55-4e67-9df0-d52895183be8/volumes" Nov 24 12:58:07 crc kubenswrapper[4756]: I1124 12:58:07.475374 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:58:07 crc kubenswrapper[4756]: E1124 12:58:07.475810 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:58:19 crc kubenswrapper[4756]: I1124 12:58:19.149152 4756 scope.go:117] "RemoveContainer" containerID="b2899f41db0e09f50eb9449220402c47b929894868609b21451cfe902e4a0d44" Nov 24 12:58:19 crc kubenswrapper[4756]: I1124 12:58:19.211677 4756 scope.go:117] "RemoveContainer" containerID="abd06601e0e008f8e6bb0a9a5b28d77bddab957f254155be667d2c760f120413" Nov 24 12:58:19 crc kubenswrapper[4756]: I1124 12:58:19.251248 4756 scope.go:117] "RemoveContainer" containerID="790f35ce6630d64bd630a8b33b7865a59b54f5aebb2f69576d79a4bdaaddc318" Nov 24 12:58:19 crc kubenswrapper[4756]: I1124 12:58:19.476229 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:58:19 crc kubenswrapper[4756]: E1124 12:58:19.476632 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:58:32 crc kubenswrapper[4756]: I1124 12:58:32.477332 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:58:32 crc kubenswrapper[4756]: E1124 12:58:32.478958 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:58:33 crc kubenswrapper[4756]: I1124 12:58:33.000141 4756 generic.go:334] "Generic (PLEG): container finished" podID="2750f3ce-2cc3-41e8-a2b5-7a96e17c9842" containerID="d03316b0aed3a873616ddcb5ea3545af9923681c88a5c0ef9592f7ae503c5f8e" exitCode=0 Nov 24 12:58:33 crc kubenswrapper[4756]: I1124 12:58:33.000288 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" event={"ID":"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842","Type":"ContainerDied","Data":"d03316b0aed3a873616ddcb5ea3545af9923681c88a5c0ef9592f7ae503c5f8e"} Nov 24 12:58:34 crc kubenswrapper[4756]: I1124 12:58:34.461681 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" Nov 24 12:58:34 crc kubenswrapper[4756]: I1124 12:58:34.567947 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-inventory\") pod \"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\" (UID: \"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\") " Nov 24 12:58:34 crc kubenswrapper[4756]: I1124 12:58:34.568010 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfqps\" (UniqueName: \"kubernetes.io/projected/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-kube-api-access-cfqps\") pod \"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\" (UID: \"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\") " Nov 24 12:58:34 crc kubenswrapper[4756]: I1124 12:58:34.568060 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-ssh-key\") pod 
\"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\" (UID: \"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842\") " Nov 24 12:58:34 crc kubenswrapper[4756]: I1124 12:58:34.574776 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-kube-api-access-cfqps" (OuterVolumeSpecName: "kube-api-access-cfqps") pod "2750f3ce-2cc3-41e8-a2b5-7a96e17c9842" (UID: "2750f3ce-2cc3-41e8-a2b5-7a96e17c9842"). InnerVolumeSpecName "kube-api-access-cfqps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:58:34 crc kubenswrapper[4756]: I1124 12:58:34.596851 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-inventory" (OuterVolumeSpecName: "inventory") pod "2750f3ce-2cc3-41e8-a2b5-7a96e17c9842" (UID: "2750f3ce-2cc3-41e8-a2b5-7a96e17c9842"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:58:34 crc kubenswrapper[4756]: I1124 12:58:34.611523 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2750f3ce-2cc3-41e8-a2b5-7a96e17c9842" (UID: "2750f3ce-2cc3-41e8-a2b5-7a96e17c9842"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:58:34 crc kubenswrapper[4756]: I1124 12:58:34.670572 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:58:34 crc kubenswrapper[4756]: I1124 12:58:34.670608 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfqps\" (UniqueName: \"kubernetes.io/projected/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-kube-api-access-cfqps\") on node \"crc\" DevicePath \"\"" Nov 24 12:58:34 crc kubenswrapper[4756]: I1124 12:58:34.670619 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2750f3ce-2cc3-41e8-a2b5-7a96e17c9842-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.024540 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" event={"ID":"2750f3ce-2cc3-41e8-a2b5-7a96e17c9842","Type":"ContainerDied","Data":"d62901375556759af2261d0703c5265cce1b9c0ffb4cb835b0f3571562d38205"} Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.024587 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d62901375556759af2261d0703c5265cce1b9c0ffb4cb835b0f3571562d38205" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.024619 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.135743 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-c6sz4"] Nov 24 12:58:35 crc kubenswrapper[4756]: E1124 12:58:35.136255 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2750f3ce-2cc3-41e8-a2b5-7a96e17c9842" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.136280 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2750f3ce-2cc3-41e8-a2b5-7a96e17c9842" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.136569 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2750f3ce-2cc3-41e8-a2b5-7a96e17c9842" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.137548 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.139932 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.140243 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.140477 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.140795 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.149824 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-c6sz4"] Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.181699 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-c6sz4\" (UID: \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.181751 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtr5l\" (UniqueName: \"kubernetes.io/projected/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-kube-api-access-dtr5l\") pod \"ssh-known-hosts-edpm-deployment-c6sz4\" (UID: \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.181819 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-c6sz4\" (UID: \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.284272 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-c6sz4\" (UID: \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.284343 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtr5l\" (UniqueName: \"kubernetes.io/projected/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-kube-api-access-dtr5l\") pod \"ssh-known-hosts-edpm-deployment-c6sz4\" (UID: \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.284426 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-c6sz4\" (UID: \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.290537 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-c6sz4\" (UID: \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" Nov 
24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.297705 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-c6sz4\" (UID: \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.303847 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtr5l\" (UniqueName: \"kubernetes.io/projected/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-kube-api-access-dtr5l\") pod \"ssh-known-hosts-edpm-deployment-c6sz4\" (UID: \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\") " pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" Nov 24 12:58:35 crc kubenswrapper[4756]: I1124 12:58:35.467109 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" Nov 24 12:58:36 crc kubenswrapper[4756]: I1124 12:58:36.022767 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-c6sz4"] Nov 24 12:58:36 crc kubenswrapper[4756]: W1124 12:58:36.032984 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b1a9d2c_d952_4c28_9b75_11ce1ec4f5a1.slice/crio-0675a3c7ed80b1d0dbf52d923e6df039c8c935c9df013255ef20243f5b2f2046 WatchSource:0}: Error finding container 0675a3c7ed80b1d0dbf52d923e6df039c8c935c9df013255ef20243f5b2f2046: Status 404 returned error can't find the container with id 0675a3c7ed80b1d0dbf52d923e6df039c8c935c9df013255ef20243f5b2f2046 Nov 24 12:58:36 crc kubenswrapper[4756]: I1124 12:58:36.035829 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 12:58:37 crc kubenswrapper[4756]: I1124 12:58:37.046521 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" event={"ID":"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1","Type":"ContainerStarted","Data":"e32ded2e00a2cf6fb5b7ee53f3a68bed61bf4a75e0b959ddb7817ee0192fc12d"} Nov 24 12:58:37 crc kubenswrapper[4756]: I1124 12:58:37.046993 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" event={"ID":"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1","Type":"ContainerStarted","Data":"0675a3c7ed80b1d0dbf52d923e6df039c8c935c9df013255ef20243f5b2f2046"} Nov 24 12:58:37 crc kubenswrapper[4756]: I1124 12:58:37.118130 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" podStartSLOduration=1.561312938 podStartE2EDuration="2.118103576s" podCreationTimestamp="2025-11-24 12:58:35 +0000 UTC" firstStartedPulling="2025-11-24 12:58:36.035639721 +0000 UTC m=+1848.393153863" lastFinishedPulling="2025-11-24 12:58:36.592430319 +0000 UTC m=+1848.949944501" observedRunningTime="2025-11-24 12:58:37.066770644 +0000 UTC m=+1849.424284796" watchObservedRunningTime="2025-11-24 12:58:37.118103576 +0000 UTC m=+1849.475617718" Nov 24 12:58:45 crc kubenswrapper[4756]: I1124 12:58:45.134860 4756 generic.go:334] "Generic (PLEG): container finished" podID="8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1" containerID="e32ded2e00a2cf6fb5b7ee53f3a68bed61bf4a75e0b959ddb7817ee0192fc12d" exitCode=0 Nov 24 12:58:45 crc kubenswrapper[4756]: I1124 12:58:45.135024 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" event={"ID":"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1","Type":"ContainerDied","Data":"e32ded2e00a2cf6fb5b7ee53f3a68bed61bf4a75e0b959ddb7817ee0192fc12d"} Nov 24 12:58:46 crc kubenswrapper[4756]: I1124 12:58:46.485578 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:58:46 crc kubenswrapper[4756]: E1124 12:58:46.487272 4756 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:58:46 crc kubenswrapper[4756]: I1124 12:58:46.635478 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" Nov 24 12:58:46 crc kubenswrapper[4756]: I1124 12:58:46.728148 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-ssh-key-openstack-edpm-ipam\") pod \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\" (UID: \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\") " Nov 24 12:58:46 crc kubenswrapper[4756]: I1124 12:58:46.728547 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-inventory-0\") pod \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\" (UID: \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\") " Nov 24 12:58:46 crc kubenswrapper[4756]: I1124 12:58:46.728725 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtr5l\" (UniqueName: \"kubernetes.io/projected/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-kube-api-access-dtr5l\") pod \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\" (UID: \"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1\") " Nov 24 12:58:46 crc kubenswrapper[4756]: I1124 12:58:46.734751 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-kube-api-access-dtr5l" (OuterVolumeSpecName: "kube-api-access-dtr5l") pod 
"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1" (UID: "8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1"). InnerVolumeSpecName "kube-api-access-dtr5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:58:46 crc kubenswrapper[4756]: I1124 12:58:46.772799 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1" (UID: "8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:58:46 crc kubenswrapper[4756]: I1124 12:58:46.785704 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1" (UID: "8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:58:46 crc kubenswrapper[4756]: I1124 12:58:46.830648 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 12:58:46 crc kubenswrapper[4756]: I1124 12:58:46.830680 4756 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 12:58:46 crc kubenswrapper[4756]: I1124 12:58:46.830689 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtr5l\" (UniqueName: \"kubernetes.io/projected/8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1-kube-api-access-dtr5l\") on node \"crc\" DevicePath \"\"" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.184138 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" event={"ID":"8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1","Type":"ContainerDied","Data":"0675a3c7ed80b1d0dbf52d923e6df039c8c935c9df013255ef20243f5b2f2046"} Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.184238 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0675a3c7ed80b1d0dbf52d923e6df039c8c935c9df013255ef20243f5b2f2046" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.184342 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-c6sz4" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.251297 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv"] Nov 24 12:58:47 crc kubenswrapper[4756]: E1124 12:58:47.251789 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1" containerName="ssh-known-hosts-edpm-deployment" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.251812 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1" containerName="ssh-known-hosts-edpm-deployment" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.252086 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1" containerName="ssh-known-hosts-edpm-deployment" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.252950 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.255763 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.255985 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.257142 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.257609 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.281189 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv"] Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.445703 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16cccf4b-aeec-4529-9f5f-547e0df302e1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-564fv\" (UID: \"16cccf4b-aeec-4529-9f5f-547e0df302e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.445755 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cccf4b-aeec-4529-9f5f-547e0df302e1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-564fv\" (UID: \"16cccf4b-aeec-4529-9f5f-547e0df302e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.446072 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5phll\" (UniqueName: \"kubernetes.io/projected/16cccf4b-aeec-4529-9f5f-547e0df302e1-kube-api-access-5phll\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-564fv\" (UID: \"16cccf4b-aeec-4529-9f5f-547e0df302e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.548233 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16cccf4b-aeec-4529-9f5f-547e0df302e1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-564fv\" (UID: \"16cccf4b-aeec-4529-9f5f-547e0df302e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.548389 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cccf4b-aeec-4529-9f5f-547e0df302e1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-564fv\" (UID: \"16cccf4b-aeec-4529-9f5f-547e0df302e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.549032 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5phll\" (UniqueName: \"kubernetes.io/projected/16cccf4b-aeec-4529-9f5f-547e0df302e1-kube-api-access-5phll\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-564fv\" (UID: \"16cccf4b-aeec-4529-9f5f-547e0df302e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.559329 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cccf4b-aeec-4529-9f5f-547e0df302e1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-564fv\" (UID: \"16cccf4b-aeec-4529-9f5f-547e0df302e1\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.562251 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16cccf4b-aeec-4529-9f5f-547e0df302e1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-564fv\" (UID: \"16cccf4b-aeec-4529-9f5f-547e0df302e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.574088 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5phll\" (UniqueName: \"kubernetes.io/projected/16cccf4b-aeec-4529-9f5f-547e0df302e1-kube-api-access-5phll\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-564fv\" (UID: \"16cccf4b-aeec-4529-9f5f-547e0df302e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" Nov 24 12:58:47 crc kubenswrapper[4756]: I1124 12:58:47.872833 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" Nov 24 12:58:48 crc kubenswrapper[4756]: I1124 12:58:48.422808 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv"] Nov 24 12:58:49 crc kubenswrapper[4756]: I1124 12:58:49.202844 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" event={"ID":"16cccf4b-aeec-4529-9f5f-547e0df302e1","Type":"ContainerStarted","Data":"0f4d74e655111c8484d84452e6d2b4b28899f06bb1823bb2f406baf2e63d8329"} Nov 24 12:58:49 crc kubenswrapper[4756]: I1124 12:58:49.223173 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:58:50 crc kubenswrapper[4756]: I1124 12:58:50.249260 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" event={"ID":"16cccf4b-aeec-4529-9f5f-547e0df302e1","Type":"ContainerStarted","Data":"b3f9f86db77bbc0c2a492bc9830bbeeda160bcc9c2120ab8a706df7deea4c33a"} Nov 24 12:58:50 crc kubenswrapper[4756]: I1124 12:58:50.273349 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" podStartSLOduration=2.479662246 podStartE2EDuration="3.273324083s" podCreationTimestamp="2025-11-24 12:58:47 +0000 UTC" firstStartedPulling="2025-11-24 12:58:48.426778098 +0000 UTC m=+1860.784292240" lastFinishedPulling="2025-11-24 12:58:49.220439915 +0000 UTC m=+1861.577954077" observedRunningTime="2025-11-24 12:58:50.26992107 +0000 UTC m=+1862.627435242" watchObservedRunningTime="2025-11-24 12:58:50.273324083 +0000 UTC m=+1862.630838245" Nov 24 12:58:59 crc kubenswrapper[4756]: I1124 12:58:59.341111 4756 generic.go:334] "Generic (PLEG): container finished" podID="16cccf4b-aeec-4529-9f5f-547e0df302e1" 
containerID="b3f9f86db77bbc0c2a492bc9830bbeeda160bcc9c2120ab8a706df7deea4c33a" exitCode=0 Nov 24 12:58:59 crc kubenswrapper[4756]: I1124 12:58:59.341208 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" event={"ID":"16cccf4b-aeec-4529-9f5f-547e0df302e1","Type":"ContainerDied","Data":"b3f9f86db77bbc0c2a492bc9830bbeeda160bcc9c2120ab8a706df7deea4c33a"} Nov 24 12:59:00 crc kubenswrapper[4756]: I1124 12:59:00.898277 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.043597 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16cccf4b-aeec-4529-9f5f-547e0df302e1-ssh-key\") pod \"16cccf4b-aeec-4529-9f5f-547e0df302e1\" (UID: \"16cccf4b-aeec-4529-9f5f-547e0df302e1\") " Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.043821 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5phll\" (UniqueName: \"kubernetes.io/projected/16cccf4b-aeec-4529-9f5f-547e0df302e1-kube-api-access-5phll\") pod \"16cccf4b-aeec-4529-9f5f-547e0df302e1\" (UID: \"16cccf4b-aeec-4529-9f5f-547e0df302e1\") " Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.044713 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cccf4b-aeec-4529-9f5f-547e0df302e1-inventory\") pod \"16cccf4b-aeec-4529-9f5f-547e0df302e1\" (UID: \"16cccf4b-aeec-4529-9f5f-547e0df302e1\") " Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.057315 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16cccf4b-aeec-4529-9f5f-547e0df302e1-kube-api-access-5phll" (OuterVolumeSpecName: "kube-api-access-5phll") pod "16cccf4b-aeec-4529-9f5f-547e0df302e1" (UID: 
"16cccf4b-aeec-4529-9f5f-547e0df302e1"). InnerVolumeSpecName "kube-api-access-5phll". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.076649 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16cccf4b-aeec-4529-9f5f-547e0df302e1-inventory" (OuterVolumeSpecName: "inventory") pod "16cccf4b-aeec-4529-9f5f-547e0df302e1" (UID: "16cccf4b-aeec-4529-9f5f-547e0df302e1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.094208 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16cccf4b-aeec-4529-9f5f-547e0df302e1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "16cccf4b-aeec-4529-9f5f-547e0df302e1" (UID: "16cccf4b-aeec-4529-9f5f-547e0df302e1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.147603 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cccf4b-aeec-4529-9f5f-547e0df302e1-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.147639 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16cccf4b-aeec-4529-9f5f-547e0df302e1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.147652 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5phll\" (UniqueName: \"kubernetes.io/projected/16cccf4b-aeec-4529-9f5f-547e0df302e1-kube-api-access-5phll\") on node \"crc\" DevicePath \"\"" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.372759 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" 
event={"ID":"16cccf4b-aeec-4529-9f5f-547e0df302e1","Type":"ContainerDied","Data":"0f4d74e655111c8484d84452e6d2b4b28899f06bb1823bb2f406baf2e63d8329"} Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.372825 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f4d74e655111c8484d84452e6d2b4b28899f06bb1823bb2f406baf2e63d8329" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.372875 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-564fv" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.467971 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6"] Nov 24 12:59:01 crc kubenswrapper[4756]: E1124 12:59:01.468593 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16cccf4b-aeec-4529-9f5f-547e0df302e1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.468616 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="16cccf4b-aeec-4529-9f5f-547e0df302e1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.468879 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="16cccf4b-aeec-4529-9f5f-547e0df302e1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.469652 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.471777 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.471899 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.471972 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.473252 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.475590 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:59:01 crc kubenswrapper[4756]: E1124 12:59:01.476226 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.483822 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6"] Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.556290 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/815b1dea-8fed-47a0-bb79-5eb5bb428c34-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6\" (UID: 
\"815b1dea-8fed-47a0-bb79-5eb5bb428c34\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.556327 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/815b1dea-8fed-47a0-bb79-5eb5bb428c34-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6\" (UID: \"815b1dea-8fed-47a0-bb79-5eb5bb428c34\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.556386 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqfxm\" (UniqueName: \"kubernetes.io/projected/815b1dea-8fed-47a0-bb79-5eb5bb428c34-kube-api-access-zqfxm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6\" (UID: \"815b1dea-8fed-47a0-bb79-5eb5bb428c34\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.658037 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/815b1dea-8fed-47a0-bb79-5eb5bb428c34-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6\" (UID: \"815b1dea-8fed-47a0-bb79-5eb5bb428c34\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.658079 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/815b1dea-8fed-47a0-bb79-5eb5bb428c34-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6\" (UID: \"815b1dea-8fed-47a0-bb79-5eb5bb428c34\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.658153 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-zqfxm\" (UniqueName: \"kubernetes.io/projected/815b1dea-8fed-47a0-bb79-5eb5bb428c34-kube-api-access-zqfxm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6\" (UID: \"815b1dea-8fed-47a0-bb79-5eb5bb428c34\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.665517 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/815b1dea-8fed-47a0-bb79-5eb5bb428c34-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6\" (UID: \"815b1dea-8fed-47a0-bb79-5eb5bb428c34\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.667376 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/815b1dea-8fed-47a0-bb79-5eb5bb428c34-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6\" (UID: \"815b1dea-8fed-47a0-bb79-5eb5bb428c34\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.673558 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqfxm\" (UniqueName: \"kubernetes.io/projected/815b1dea-8fed-47a0-bb79-5eb5bb428c34-kube-api-access-zqfxm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6\" (UID: \"815b1dea-8fed-47a0-bb79-5eb5bb428c34\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" Nov 24 12:59:01 crc kubenswrapper[4756]: I1124 12:59:01.805151 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" Nov 24 12:59:02 crc kubenswrapper[4756]: I1124 12:59:02.161014 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6"] Nov 24 12:59:02 crc kubenswrapper[4756]: W1124 12:59:02.163951 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod815b1dea_8fed_47a0_bb79_5eb5bb428c34.slice/crio-c1876c84d91453d195f59cd27b331e6598c0e650eff62b7429b076e1ce92a190 WatchSource:0}: Error finding container c1876c84d91453d195f59cd27b331e6598c0e650eff62b7429b076e1ce92a190: Status 404 returned error can't find the container with id c1876c84d91453d195f59cd27b331e6598c0e650eff62b7429b076e1ce92a190 Nov 24 12:59:02 crc kubenswrapper[4756]: I1124 12:59:02.384388 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" event={"ID":"815b1dea-8fed-47a0-bb79-5eb5bb428c34","Type":"ContainerStarted","Data":"c1876c84d91453d195f59cd27b331e6598c0e650eff62b7429b076e1ce92a190"} Nov 24 12:59:03 crc kubenswrapper[4756]: I1124 12:59:03.397722 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" event={"ID":"815b1dea-8fed-47a0-bb79-5eb5bb428c34","Type":"ContainerStarted","Data":"5cedbb309f5f07a8d5c0b970f487735b614d683068aeb9761dbb398034ec64c2"} Nov 24 12:59:03 crc kubenswrapper[4756]: I1124 12:59:03.422424 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" podStartSLOduration=1.953626979 podStartE2EDuration="2.422402722s" podCreationTimestamp="2025-11-24 12:59:01 +0000 UTC" firstStartedPulling="2025-11-24 12:59:02.166289774 +0000 UTC m=+1874.523803916" lastFinishedPulling="2025-11-24 12:59:02.635065517 +0000 UTC m=+1874.992579659" 
observedRunningTime="2025-11-24 12:59:03.419227495 +0000 UTC m=+1875.776741687" watchObservedRunningTime="2025-11-24 12:59:03.422402722 +0000 UTC m=+1875.779916874" Nov 24 12:59:13 crc kubenswrapper[4756]: I1124 12:59:13.511590 4756 generic.go:334] "Generic (PLEG): container finished" podID="815b1dea-8fed-47a0-bb79-5eb5bb428c34" containerID="5cedbb309f5f07a8d5c0b970f487735b614d683068aeb9761dbb398034ec64c2" exitCode=0 Nov 24 12:59:13 crc kubenswrapper[4756]: I1124 12:59:13.511668 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" event={"ID":"815b1dea-8fed-47a0-bb79-5eb5bb428c34","Type":"ContainerDied","Data":"5cedbb309f5f07a8d5c0b970f487735b614d683068aeb9761dbb398034ec64c2"} Nov 24 12:59:14 crc kubenswrapper[4756]: I1124 12:59:14.936062 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.097524 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqfxm\" (UniqueName: \"kubernetes.io/projected/815b1dea-8fed-47a0-bb79-5eb5bb428c34-kube-api-access-zqfxm\") pod \"815b1dea-8fed-47a0-bb79-5eb5bb428c34\" (UID: \"815b1dea-8fed-47a0-bb79-5eb5bb428c34\") " Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.097744 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/815b1dea-8fed-47a0-bb79-5eb5bb428c34-ssh-key\") pod \"815b1dea-8fed-47a0-bb79-5eb5bb428c34\" (UID: \"815b1dea-8fed-47a0-bb79-5eb5bb428c34\") " Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.097796 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/815b1dea-8fed-47a0-bb79-5eb5bb428c34-inventory\") pod \"815b1dea-8fed-47a0-bb79-5eb5bb428c34\" (UID: 
\"815b1dea-8fed-47a0-bb79-5eb5bb428c34\") " Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.104242 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815b1dea-8fed-47a0-bb79-5eb5bb428c34-kube-api-access-zqfxm" (OuterVolumeSpecName: "kube-api-access-zqfxm") pod "815b1dea-8fed-47a0-bb79-5eb5bb428c34" (UID: "815b1dea-8fed-47a0-bb79-5eb5bb428c34"). InnerVolumeSpecName "kube-api-access-zqfxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.142588 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815b1dea-8fed-47a0-bb79-5eb5bb428c34-inventory" (OuterVolumeSpecName: "inventory") pod "815b1dea-8fed-47a0-bb79-5eb5bb428c34" (UID: "815b1dea-8fed-47a0-bb79-5eb5bb428c34"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.163194 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815b1dea-8fed-47a0-bb79-5eb5bb428c34-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "815b1dea-8fed-47a0-bb79-5eb5bb428c34" (UID: "815b1dea-8fed-47a0-bb79-5eb5bb428c34"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.200864 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/815b1dea-8fed-47a0-bb79-5eb5bb428c34-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.201281 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/815b1dea-8fed-47a0-bb79-5eb5bb428c34-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.201297 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqfxm\" (UniqueName: \"kubernetes.io/projected/815b1dea-8fed-47a0-bb79-5eb5bb428c34-kube-api-access-zqfxm\") on node \"crc\" DevicePath \"\"" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.476053 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.540872 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" event={"ID":"815b1dea-8fed-47a0-bb79-5eb5bb428c34","Type":"ContainerDied","Data":"c1876c84d91453d195f59cd27b331e6598c0e650eff62b7429b076e1ce92a190"} Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.540910 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1876c84d91453d195f59cd27b331e6598c0e650eff62b7429b076e1ce92a190" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.540977 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.641013 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd"] Nov 24 12:59:15 crc kubenswrapper[4756]: E1124 12:59:15.641777 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815b1dea-8fed-47a0-bb79-5eb5bb428c34" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.641879 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="815b1dea-8fed-47a0-bb79-5eb5bb428c34" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.642212 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="815b1dea-8fed-47a0-bb79-5eb5bb428c34" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.643055 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.647290 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.647695 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.648827 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.649044 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.648838 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.649329 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.650088 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.655056 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.657556 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd"] Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.812893 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.813241 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.813381 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.813542 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.813663 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.813780 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.813892 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.814005 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.814124 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.814266 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.814443 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.814658 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78bkk\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-kube-api-access-78bkk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.814806 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-ovn-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.814907 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.916657 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.916785 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.916834 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.916871 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.916911 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.916951 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.916998 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.917035 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.918307 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.918367 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78bkk\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-kube-api-access-78bkk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.918426 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.918501 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.918629 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.918681 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.925268 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.926146 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.926264 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.927318 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.928976 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.930208 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.932103 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.933055 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.933831 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.934314 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.934702 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.934925 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.941673 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 crc kubenswrapper[4756]: I1124 12:59:15.942740 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78bkk\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-kube-api-access-78bkk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:15 
crc kubenswrapper[4756]: I1124 12:59:15.973837 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 12:59:16 crc kubenswrapper[4756]: I1124 12:59:16.561428 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd"] Nov 24 12:59:16 crc kubenswrapper[4756]: I1124 12:59:16.564030 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"0c7d2fa4a438e2a051f122ae687f3c270b68a89cf1dc7e0bd17effe9f131a218"} Nov 24 12:59:17 crc kubenswrapper[4756]: I1124 12:59:17.574396 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" event={"ID":"07b45682-fb34-44c2-8fa1-fcf25559773e","Type":"ContainerStarted","Data":"2419bf07df1d64d1fcd4f23e022c1cd52b2479d7ae285a8b853808c4ca78a5bb"} Nov 24 12:59:17 crc kubenswrapper[4756]: I1124 12:59:17.575128 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" event={"ID":"07b45682-fb34-44c2-8fa1-fcf25559773e","Type":"ContainerStarted","Data":"45ff9204ea561dd8d36a656256de6b30cb8e6898f8032fc97f2c080beb025cf9"} Nov 24 12:59:17 crc kubenswrapper[4756]: I1124 12:59:17.617130 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" podStartSLOduration=2.147974255 podStartE2EDuration="2.617100594s" podCreationTimestamp="2025-11-24 12:59:15 +0000 UTC" firstStartedPulling="2025-11-24 12:59:16.580695845 +0000 UTC m=+1888.938209997" lastFinishedPulling="2025-11-24 12:59:17.049822154 +0000 UTC m=+1889.407336336" observedRunningTime="2025-11-24 12:59:17.610044192 +0000 UTC m=+1889.967558354" 
watchObservedRunningTime="2025-11-24 12:59:17.617100594 +0000 UTC m=+1889.974614756" Nov 24 12:59:59 crc kubenswrapper[4756]: I1124 12:59:59.007819 4756 generic.go:334] "Generic (PLEG): container finished" podID="07b45682-fb34-44c2-8fa1-fcf25559773e" containerID="2419bf07df1d64d1fcd4f23e022c1cd52b2479d7ae285a8b853808c4ca78a5bb" exitCode=0 Nov 24 12:59:59 crc kubenswrapper[4756]: I1124 12:59:59.007906 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" event={"ID":"07b45682-fb34-44c2-8fa1-fcf25559773e","Type":"ContainerDied","Data":"2419bf07df1d64d1fcd4f23e022c1cd52b2479d7ae285a8b853808c4ca78a5bb"} Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.157908 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn"] Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.160091 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.164942 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.165236 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.187896 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn"] Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.273385 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea2bf010-7ad0-430f-8a16-20dcfb150d38-config-volume\") pod \"collect-profiles-29399820-fcqzn\" 
(UID: \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.273545 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbhj8\" (UniqueName: \"kubernetes.io/projected/ea2bf010-7ad0-430f-8a16-20dcfb150d38-kube-api-access-wbhj8\") pod \"collect-profiles-29399820-fcqzn\" (UID: \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.273577 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea2bf010-7ad0-430f-8a16-20dcfb150d38-secret-volume\") pod \"collect-profiles-29399820-fcqzn\" (UID: \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.376470 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea2bf010-7ad0-430f-8a16-20dcfb150d38-config-volume\") pod \"collect-profiles-29399820-fcqzn\" (UID: \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.376628 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbhj8\" (UniqueName: \"kubernetes.io/projected/ea2bf010-7ad0-430f-8a16-20dcfb150d38-kube-api-access-wbhj8\") pod \"collect-profiles-29399820-fcqzn\" (UID: \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.376659 4756 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea2bf010-7ad0-430f-8a16-20dcfb150d38-secret-volume\") pod \"collect-profiles-29399820-fcqzn\" (UID: \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.377440 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea2bf010-7ad0-430f-8a16-20dcfb150d38-config-volume\") pod \"collect-profiles-29399820-fcqzn\" (UID: \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.384070 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea2bf010-7ad0-430f-8a16-20dcfb150d38-secret-volume\") pod \"collect-profiles-29399820-fcqzn\" (UID: \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.396486 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbhj8\" (UniqueName: \"kubernetes.io/projected/ea2bf010-7ad0-430f-8a16-20dcfb150d38-kube-api-access-wbhj8\") pod \"collect-profiles-29399820-fcqzn\" (UID: \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.488195 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.499085 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.585843 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-libvirt-combined-ca-bundle\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.585942 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-nova-combined-ca-bundle\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.585978 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.586025 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-ssh-key\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.586043 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-inventory\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.586070 
4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-ovn-combined-ca-bundle\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.586100 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-telemetry-combined-ca-bundle\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.586126 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-repo-setup-combined-ca-bundle\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.586172 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78bkk\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-kube-api-access-78bkk\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.586240 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.586276 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-neutron-metadata-combined-ca-bundle\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.586329 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-bootstrap-combined-ca-bundle\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.586368 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.586408 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"07b45682-fb34-44c2-8fa1-fcf25559773e\" (UID: \"07b45682-fb34-44c2-8fa1-fcf25559773e\") " Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.595566 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.596102 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-kube-api-access-78bkk" (OuterVolumeSpecName: "kube-api-access-78bkk") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). InnerVolumeSpecName "kube-api-access-78bkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.599780 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.631823 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.633405 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.636113 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.636196 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.636479 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.636561 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.637129 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.638298 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.639321 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.644647 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.662750 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-inventory" (OuterVolumeSpecName: "inventory") pod "07b45682-fb34-44c2-8fa1-fcf25559773e" (UID: "07b45682-fb34-44c2-8fa1-fcf25559773e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.689584 4756 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.689617 4756 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.689631 4756 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.689640 4756 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.689652 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 
13:00:00.689662 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.689671 4756 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.689680 4756 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.689689 4756 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.689700 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78bkk\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-kube-api-access-78bkk\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.689711 4756 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.689720 4756 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 
crc kubenswrapper[4756]: I1124 13:00:00.689729 4756 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b45682-fb34-44c2-8fa1-fcf25559773e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.689738 4756 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07b45682-fb34-44c2-8fa1-fcf25559773e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:00 crc kubenswrapper[4756]: I1124 13:00:00.991861 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn"] Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.066611 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" event={"ID":"ea2bf010-7ad0-430f-8a16-20dcfb150d38","Type":"ContainerStarted","Data":"a29f351f8e07ecb96e2d28faff065a7cca015f85745ded3e795d5e6676dbe3ba"} Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.080808 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" event={"ID":"07b45682-fb34-44c2-8fa1-fcf25559773e","Type":"ContainerDied","Data":"45ff9204ea561dd8d36a656256de6b30cb8e6898f8032fc97f2c080beb025cf9"} Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.080849 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45ff9204ea561dd8d36a656256de6b30cb8e6898f8032fc97f2c080beb025cf9" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.080914 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.190730 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr"] Nov 24 13:00:01 crc kubenswrapper[4756]: E1124 13:00:01.192220 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b45682-fb34-44c2-8fa1-fcf25559773e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.192301 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b45682-fb34-44c2-8fa1-fcf25559773e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.192605 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b45682-fb34-44c2-8fa1-fcf25559773e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.193424 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.197041 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.197347 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.200563 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.207004 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.207232 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.214872 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr"] Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.302571 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e0a0f4dd-db57-4645-9b06-51c0416636f4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.303542 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5tf5\" (UniqueName: \"kubernetes.io/projected/e0a0f4dd-db57-4645-9b06-51c0416636f4-kube-api-access-d5tf5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: 
\"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.303701 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.303979 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.304377 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.407002 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e0a0f4dd-db57-4645-9b06-51c0416636f4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.407056 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d5tf5\" (UniqueName: \"kubernetes.io/projected/e0a0f4dd-db57-4645-9b06-51c0416636f4-kube-api-access-d5tf5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.407087 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.407121 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.407150 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.410975 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e0a0f4dd-db57-4645-9b06-51c0416636f4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc 
kubenswrapper[4756]: I1124 13:00:01.412291 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.412913 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.413620 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.426462 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5tf5\" (UniqueName: \"kubernetes.io/projected/e0a0f4dd-db57-4645-9b06-51c0416636f4-kube-api-access-d5tf5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mp7tr\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:01 crc kubenswrapper[4756]: I1124 13:00:01.545715 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:00:02 crc kubenswrapper[4756]: I1124 13:00:02.091610 4756 generic.go:334] "Generic (PLEG): container finished" podID="ea2bf010-7ad0-430f-8a16-20dcfb150d38" containerID="e96ca971a28f4cd0759661f56ac934c75d6b41ffcb5a25de367a360f208d75ea" exitCode=0 Nov 24 13:00:02 crc kubenswrapper[4756]: I1124 13:00:02.091778 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" event={"ID":"ea2bf010-7ad0-430f-8a16-20dcfb150d38","Type":"ContainerDied","Data":"e96ca971a28f4cd0759661f56ac934c75d6b41ffcb5a25de367a360f208d75ea"} Nov 24 13:00:02 crc kubenswrapper[4756]: I1124 13:00:02.131381 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr"] Nov 24 13:00:02 crc kubenswrapper[4756]: W1124 13:00:02.139722 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0a0f4dd_db57_4645_9b06_51c0416636f4.slice/crio-33cb567298b2a4ca45b538b3ea7d3c34bc520ac09c7a5a85df7d287fca128885 WatchSource:0}: Error finding container 33cb567298b2a4ca45b538b3ea7d3c34bc520ac09c7a5a85df7d287fca128885: Status 404 returned error can't find the container with id 33cb567298b2a4ca45b538b3ea7d3c34bc520ac09c7a5a85df7d287fca128885 Nov 24 13:00:03 crc kubenswrapper[4756]: I1124 13:00:03.102630 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" event={"ID":"e0a0f4dd-db57-4645-9b06-51c0416636f4","Type":"ContainerStarted","Data":"46021c65d8984fb369bd7293ed2aa549b2321b6d6456d51da1ef6f7206104140"} Nov 24 13:00:03 crc kubenswrapper[4756]: I1124 13:00:03.103075 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" 
event={"ID":"e0a0f4dd-db57-4645-9b06-51c0416636f4","Type":"ContainerStarted","Data":"33cb567298b2a4ca45b538b3ea7d3c34bc520ac09c7a5a85df7d287fca128885"} Nov 24 13:00:03 crc kubenswrapper[4756]: I1124 13:00:03.456570 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" Nov 24 13:00:03 crc kubenswrapper[4756]: I1124 13:00:03.554836 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea2bf010-7ad0-430f-8a16-20dcfb150d38-secret-volume\") pod \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\" (UID: \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\") " Nov 24 13:00:03 crc kubenswrapper[4756]: I1124 13:00:03.554906 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbhj8\" (UniqueName: \"kubernetes.io/projected/ea2bf010-7ad0-430f-8a16-20dcfb150d38-kube-api-access-wbhj8\") pod \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\" (UID: \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\") " Nov 24 13:00:03 crc kubenswrapper[4756]: I1124 13:00:03.554970 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea2bf010-7ad0-430f-8a16-20dcfb150d38-config-volume\") pod \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\" (UID: \"ea2bf010-7ad0-430f-8a16-20dcfb150d38\") " Nov 24 13:00:03 crc kubenswrapper[4756]: I1124 13:00:03.557733 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea2bf010-7ad0-430f-8a16-20dcfb150d38-config-volume" (OuterVolumeSpecName: "config-volume") pod "ea2bf010-7ad0-430f-8a16-20dcfb150d38" (UID: "ea2bf010-7ad0-430f-8a16-20dcfb150d38"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:00:03 crc kubenswrapper[4756]: I1124 13:00:03.562026 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea2bf010-7ad0-430f-8a16-20dcfb150d38-kube-api-access-wbhj8" (OuterVolumeSpecName: "kube-api-access-wbhj8") pod "ea2bf010-7ad0-430f-8a16-20dcfb150d38" (UID: "ea2bf010-7ad0-430f-8a16-20dcfb150d38"). InnerVolumeSpecName "kube-api-access-wbhj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:00:03 crc kubenswrapper[4756]: I1124 13:00:03.571526 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2bf010-7ad0-430f-8a16-20dcfb150d38-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ea2bf010-7ad0-430f-8a16-20dcfb150d38" (UID: "ea2bf010-7ad0-430f-8a16-20dcfb150d38"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:00:03 crc kubenswrapper[4756]: I1124 13:00:03.658291 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea2bf010-7ad0-430f-8a16-20dcfb150d38-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:03 crc kubenswrapper[4756]: I1124 13:00:03.658333 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbhj8\" (UniqueName: \"kubernetes.io/projected/ea2bf010-7ad0-430f-8a16-20dcfb150d38-kube-api-access-wbhj8\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:03 crc kubenswrapper[4756]: I1124 13:00:03.658345 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea2bf010-7ad0-430f-8a16-20dcfb150d38-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:00:04 crc kubenswrapper[4756]: I1124 13:00:04.118941 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" Nov 24 13:00:04 crc kubenswrapper[4756]: I1124 13:00:04.119099 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn" event={"ID":"ea2bf010-7ad0-430f-8a16-20dcfb150d38","Type":"ContainerDied","Data":"a29f351f8e07ecb96e2d28faff065a7cca015f85745ded3e795d5e6676dbe3ba"} Nov 24 13:00:04 crc kubenswrapper[4756]: I1124 13:00:04.119138 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a29f351f8e07ecb96e2d28faff065a7cca015f85745ded3e795d5e6676dbe3ba" Nov 24 13:00:04 crc kubenswrapper[4756]: I1124 13:00:04.140567 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" podStartSLOduration=2.6316385909999997 podStartE2EDuration="3.140543127s" podCreationTimestamp="2025-11-24 13:00:01 +0000 UTC" firstStartedPulling="2025-11-24 13:00:02.141912435 +0000 UTC m=+1934.499426577" lastFinishedPulling="2025-11-24 13:00:02.650816971 +0000 UTC m=+1935.008331113" observedRunningTime="2025-11-24 13:00:04.140098245 +0000 UTC m=+1936.497612407" watchObservedRunningTime="2025-11-24 13:00:04.140543127 +0000 UTC m=+1936.498057269" Nov 24 13:00:04 crc kubenswrapper[4756]: I1124 13:00:04.541186 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr"] Nov 24 13:00:04 crc kubenswrapper[4756]: I1124 13:00:04.555270 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399775-r6bnr"] Nov 24 13:00:06 crc kubenswrapper[4756]: I1124 13:00:06.491229 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="036136f7-02ff-449a-9367-0cf354821811" path="/var/lib/kubelet/pods/036136f7-02ff-449a-9367-0cf354821811/volumes" Nov 24 13:00:19 crc kubenswrapper[4756]: I1124 
13:00:19.418410 4756 scope.go:117] "RemoveContainer" containerID="cbed0566e43b91329f5ea5dae931dc6ad1d7daec6e9c5c6fc1d0251cc43ab9b2" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.153540 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29399821-rlf75"] Nov 24 13:01:00 crc kubenswrapper[4756]: E1124 13:01:00.154856 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2bf010-7ad0-430f-8a16-20dcfb150d38" containerName="collect-profiles" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.154873 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2bf010-7ad0-430f-8a16-20dcfb150d38" containerName="collect-profiles" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.155123 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2bf010-7ad0-430f-8a16-20dcfb150d38" containerName="collect-profiles" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.156000 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.164492 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399821-rlf75"] Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.345236 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-fernet-keys\") pod \"keystone-cron-29399821-rlf75\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.345291 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz672\" (UniqueName: \"kubernetes.io/projected/c6cdb570-24c3-419d-b75b-7bd66ec283a3-kube-api-access-fz672\") pod \"keystone-cron-29399821-rlf75\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.345374 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-config-data\") pod \"keystone-cron-29399821-rlf75\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.345557 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-combined-ca-bundle\") pod \"keystone-cron-29399821-rlf75\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.447349 4756 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-config-data\") pod \"keystone-cron-29399821-rlf75\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.447439 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-combined-ca-bundle\") pod \"keystone-cron-29399821-rlf75\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.447580 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-fernet-keys\") pod \"keystone-cron-29399821-rlf75\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.447601 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz672\" (UniqueName: \"kubernetes.io/projected/c6cdb570-24c3-419d-b75b-7bd66ec283a3-kube-api-access-fz672\") pod \"keystone-cron-29399821-rlf75\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.455740 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-combined-ca-bundle\") pod \"keystone-cron-29399821-rlf75\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.456863 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-fernet-keys\") pod \"keystone-cron-29399821-rlf75\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.473402 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-config-data\") pod \"keystone-cron-29399821-rlf75\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.490113 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz672\" (UniqueName: \"kubernetes.io/projected/c6cdb570-24c3-419d-b75b-7bd66ec283a3-kube-api-access-fz672\") pod \"keystone-cron-29399821-rlf75\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:00 crc kubenswrapper[4756]: I1124 13:01:00.782172 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:01 crc kubenswrapper[4756]: I1124 13:01:01.252587 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399821-rlf75"] Nov 24 13:01:01 crc kubenswrapper[4756]: I1124 13:01:01.698695 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-rlf75" event={"ID":"c6cdb570-24c3-419d-b75b-7bd66ec283a3","Type":"ContainerStarted","Data":"c426c95f67c856063367c299f94d092e37d5d96b374d0a49e37a2afcb1ff4d4c"} Nov 24 13:01:01 crc kubenswrapper[4756]: I1124 13:01:01.699111 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-rlf75" event={"ID":"c6cdb570-24c3-419d-b75b-7bd66ec283a3","Type":"ContainerStarted","Data":"b3793187ff71e521290366da84bd04d475ce2bfa3b9ee02d600d4d62933a7981"} Nov 24 13:01:01 crc kubenswrapper[4756]: I1124 13:01:01.716225 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29399821-rlf75" podStartSLOduration=1.7162063619999999 podStartE2EDuration="1.716206362s" podCreationTimestamp="2025-11-24 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:01:01.714103135 +0000 UTC m=+1994.071617307" watchObservedRunningTime="2025-11-24 13:01:01.716206362 +0000 UTC m=+1994.073720494" Nov 24 13:01:03 crc kubenswrapper[4756]: I1124 13:01:03.720590 4756 generic.go:334] "Generic (PLEG): container finished" podID="c6cdb570-24c3-419d-b75b-7bd66ec283a3" containerID="c426c95f67c856063367c299f94d092e37d5d96b374d0a49e37a2afcb1ff4d4c" exitCode=0 Nov 24 13:01:03 crc kubenswrapper[4756]: I1124 13:01:03.720628 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-rlf75" 
event={"ID":"c6cdb570-24c3-419d-b75b-7bd66ec283a3","Type":"ContainerDied","Data":"c426c95f67c856063367c299f94d092e37d5d96b374d0a49e37a2afcb1ff4d4c"} Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.130827 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.244212 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-fernet-keys\") pod \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.244352 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-combined-ca-bundle\") pod \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.244387 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-config-data\") pod \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.244574 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz672\" (UniqueName: \"kubernetes.io/projected/c6cdb570-24c3-419d-b75b-7bd66ec283a3-kube-api-access-fz672\") pod \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.263931 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6cdb570-24c3-419d-b75b-7bd66ec283a3-kube-api-access-fz672" 
(OuterVolumeSpecName: "kube-api-access-fz672") pod "c6cdb570-24c3-419d-b75b-7bd66ec283a3" (UID: "c6cdb570-24c3-419d-b75b-7bd66ec283a3"). InnerVolumeSpecName "kube-api-access-fz672". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.273996 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c6cdb570-24c3-419d-b75b-7bd66ec283a3" (UID: "c6cdb570-24c3-419d-b75b-7bd66ec283a3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.348861 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6cdb570-24c3-419d-b75b-7bd66ec283a3" (UID: "c6cdb570-24c3-419d-b75b-7bd66ec283a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.349194 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-combined-ca-bundle\") pod \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\" (UID: \"c6cdb570-24c3-419d-b75b-7bd66ec283a3\") " Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.349746 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz672\" (UniqueName: \"kubernetes.io/projected/c6cdb570-24c3-419d-b75b-7bd66ec283a3-kube-api-access-fz672\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.349772 4756 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:05 crc kubenswrapper[4756]: W1124 13:01:05.349885 4756 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c6cdb570-24c3-419d-b75b-7bd66ec283a3/volumes/kubernetes.io~secret/combined-ca-bundle Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.349905 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6cdb570-24c3-419d-b75b-7bd66ec283a3" (UID: "c6cdb570-24c3-419d-b75b-7bd66ec283a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.378858 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-config-data" (OuterVolumeSpecName: "config-data") pod "c6cdb570-24c3-419d-b75b-7bd66ec283a3" (UID: "c6cdb570-24c3-419d-b75b-7bd66ec283a3"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.451700 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.451736 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6cdb570-24c3-419d-b75b-7bd66ec283a3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.739040 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399821-rlf75" event={"ID":"c6cdb570-24c3-419d-b75b-7bd66ec283a3","Type":"ContainerDied","Data":"b3793187ff71e521290366da84bd04d475ce2bfa3b9ee02d600d4d62933a7981"} Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.739094 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3793187ff71e521290366da84bd04d475ce2bfa3b9ee02d600d4d62933a7981" Nov 24 13:01:05 crc kubenswrapper[4756]: I1124 13:01:05.739099 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399821-rlf75" Nov 24 13:01:08 crc kubenswrapper[4756]: I1124 13:01:08.774556 4756 generic.go:334] "Generic (PLEG): container finished" podID="e0a0f4dd-db57-4645-9b06-51c0416636f4" containerID="46021c65d8984fb369bd7293ed2aa549b2321b6d6456d51da1ef6f7206104140" exitCode=0 Nov 24 13:01:08 crc kubenswrapper[4756]: I1124 13:01:08.774687 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" event={"ID":"e0a0f4dd-db57-4645-9b06-51c0416636f4","Type":"ContainerDied","Data":"46021c65d8984fb369bd7293ed2aa549b2321b6d6456d51da1ef6f7206104140"} Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.290393 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.380346 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-ovn-combined-ca-bundle\") pod \"e0a0f4dd-db57-4645-9b06-51c0416636f4\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.382061 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5tf5\" (UniqueName: \"kubernetes.io/projected/e0a0f4dd-db57-4645-9b06-51c0416636f4-kube-api-access-d5tf5\") pod \"e0a0f4dd-db57-4645-9b06-51c0416636f4\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.382310 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e0a0f4dd-db57-4645-9b06-51c0416636f4-ovncontroller-config-0\") pod \"e0a0f4dd-db57-4645-9b06-51c0416636f4\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " Nov 24 13:01:10 crc 
kubenswrapper[4756]: I1124 13:01:10.382446 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-ssh-key\") pod \"e0a0f4dd-db57-4645-9b06-51c0416636f4\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.382818 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-inventory\") pod \"e0a0f4dd-db57-4645-9b06-51c0416636f4\" (UID: \"e0a0f4dd-db57-4645-9b06-51c0416636f4\") " Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.388678 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a0f4dd-db57-4645-9b06-51c0416636f4-kube-api-access-d5tf5" (OuterVolumeSpecName: "kube-api-access-d5tf5") pod "e0a0f4dd-db57-4645-9b06-51c0416636f4" (UID: "e0a0f4dd-db57-4645-9b06-51c0416636f4"). InnerVolumeSpecName "kube-api-access-d5tf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.389595 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e0a0f4dd-db57-4645-9b06-51c0416636f4" (UID: "e0a0f4dd-db57-4645-9b06-51c0416636f4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.411471 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e0a0f4dd-db57-4645-9b06-51c0416636f4" (UID: "e0a0f4dd-db57-4645-9b06-51c0416636f4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.412931 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a0f4dd-db57-4645-9b06-51c0416636f4-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "e0a0f4dd-db57-4645-9b06-51c0416636f4" (UID: "e0a0f4dd-db57-4645-9b06-51c0416636f4"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.431319 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-inventory" (OuterVolumeSpecName: "inventory") pod "e0a0f4dd-db57-4645-9b06-51c0416636f4" (UID: "e0a0f4dd-db57-4645-9b06-51c0416636f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.485985 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.486024 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.486035 4756 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a0f4dd-db57-4645-9b06-51c0416636f4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.486047 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5tf5\" (UniqueName: \"kubernetes.io/projected/e0a0f4dd-db57-4645-9b06-51c0416636f4-kube-api-access-d5tf5\") on node \"crc\" 
DevicePath \"\"" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.486056 4756 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e0a0f4dd-db57-4645-9b06-51c0416636f4-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.799900 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" event={"ID":"e0a0f4dd-db57-4645-9b06-51c0416636f4","Type":"ContainerDied","Data":"33cb567298b2a4ca45b538b3ea7d3c34bc520ac09c7a5a85df7d287fca128885"} Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.799959 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33cb567298b2a4ca45b538b3ea7d3c34bc520ac09c7a5a85df7d287fca128885" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.800037 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mp7tr" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.915711 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6"] Nov 24 13:01:10 crc kubenswrapper[4756]: E1124 13:01:10.916721 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cdb570-24c3-419d-b75b-7bd66ec283a3" containerName="keystone-cron" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.916765 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cdb570-24c3-419d-b75b-7bd66ec283a3" containerName="keystone-cron" Nov 24 13:01:10 crc kubenswrapper[4756]: E1124 13:01:10.916798 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a0f4dd-db57-4645-9b06-51c0416636f4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.916816 4756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e0a0f4dd-db57-4645-9b06-51c0416636f4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.917364 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a0f4dd-db57-4645-9b06-51c0416636f4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.917461 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6cdb570-24c3-419d-b75b-7bd66ec283a3" containerName="keystone-cron" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.919062 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.922414 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.923100 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.923206 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.923378 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.923569 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.926753 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6"] Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.927059 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.995558 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.995713 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.995767 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.995821 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:10 crc 
kubenswrapper[4756]: I1124 13:01:10.995997 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5jd\" (UniqueName: \"kubernetes.io/projected/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-kube-api-access-th5jd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:10 crc kubenswrapper[4756]: I1124 13:01:10.996124 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.097812 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.097961 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.098012 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.098066 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.098095 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th5jd\" (UniqueName: \"kubernetes.io/projected/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-kube-api-access-th5jd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.098126 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.103379 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.103515 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.103974 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.103979 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.107755 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: 
\"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.114252 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th5jd\" (UniqueName: \"kubernetes.io/projected/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-kube-api-access-th5jd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.239034 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:01:11 crc kubenswrapper[4756]: I1124 13:01:11.860901 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6"] Nov 24 13:01:11 crc kubenswrapper[4756]: W1124 13:01:11.869570 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cf15818_9c96_4bbe_bb89_6d26aff5bfbe.slice/crio-f1653d14650dac28e90f48e7ffbad2b15699d350cdee6a5f1992fbb4ef8c52f3 WatchSource:0}: Error finding container f1653d14650dac28e90f48e7ffbad2b15699d350cdee6a5f1992fbb4ef8c52f3: Status 404 returned error can't find the container with id f1653d14650dac28e90f48e7ffbad2b15699d350cdee6a5f1992fbb4ef8c52f3 Nov 24 13:01:12 crc kubenswrapper[4756]: I1124 13:01:12.822691 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" event={"ID":"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe","Type":"ContainerStarted","Data":"1822cd7299b5fcfd81e3206569e7010b10d3dd15ca8bf73bca061bde800ea74f"} Nov 24 13:01:12 crc kubenswrapper[4756]: I1124 13:01:12.823582 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" event={"ID":"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe","Type":"ContainerStarted","Data":"f1653d14650dac28e90f48e7ffbad2b15699d350cdee6a5f1992fbb4ef8c52f3"} Nov 24 13:01:12 crc kubenswrapper[4756]: I1124 13:01:12.844246 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" podStartSLOduration=2.260945565 podStartE2EDuration="2.844213362s" podCreationTimestamp="2025-11-24 13:01:10 +0000 UTC" firstStartedPulling="2025-11-24 13:01:11.873542359 +0000 UTC m=+2004.231056501" lastFinishedPulling="2025-11-24 13:01:12.456810116 +0000 UTC m=+2004.814324298" observedRunningTime="2025-11-24 13:01:12.843670407 +0000 UTC m=+2005.201184549" watchObservedRunningTime="2025-11-24 13:01:12.844213362 +0000 UTC m=+2005.201727544" Nov 24 13:01:19 crc kubenswrapper[4756]: I1124 13:01:19.516800 4756 scope.go:117] "RemoveContainer" containerID="294027209fb20536ae73768c3193cf44816a76f3d188e99fef4b1000b99dda5c" Nov 24 13:01:19 crc kubenswrapper[4756]: I1124 13:01:19.548757 4756 scope.go:117] "RemoveContainer" containerID="115c82c4b4d20b4d8b11d321a362d252a647c72804063dcd674e950a1ea6ddc1" Nov 24 13:01:19 crc kubenswrapper[4756]: I1124 13:01:19.575448 4756 scope.go:117] "RemoveContainer" containerID="69d39de08aff4da3996a76a5245009586e21117d95af784474f1bc31a3b74ce9" Nov 24 13:01:33 crc kubenswrapper[4756]: I1124 13:01:33.478874 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:01:33 crc kubenswrapper[4756]: I1124 13:01:33.479613 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" 
podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:01:46 crc kubenswrapper[4756]: I1124 13:01:46.130038 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jb4fj"] Nov 24 13:01:46 crc kubenswrapper[4756]: I1124 13:01:46.139460 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:46 crc kubenswrapper[4756]: I1124 13:01:46.157379 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jb4fj"] Nov 24 13:01:46 crc kubenswrapper[4756]: I1124 13:01:46.316942 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34f550fa-7cc4-45a4-a624-5524ec2255ae-catalog-content\") pod \"certified-operators-jb4fj\" (UID: \"34f550fa-7cc4-45a4-a624-5524ec2255ae\") " pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:46 crc kubenswrapper[4756]: I1124 13:01:46.317385 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34f550fa-7cc4-45a4-a624-5524ec2255ae-utilities\") pod \"certified-operators-jb4fj\" (UID: \"34f550fa-7cc4-45a4-a624-5524ec2255ae\") " pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:46 crc kubenswrapper[4756]: I1124 13:01:46.317507 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssmmj\" (UniqueName: \"kubernetes.io/projected/34f550fa-7cc4-45a4-a624-5524ec2255ae-kube-api-access-ssmmj\") pod \"certified-operators-jb4fj\" (UID: \"34f550fa-7cc4-45a4-a624-5524ec2255ae\") " pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 
13:01:46 crc kubenswrapper[4756]: I1124 13:01:46.419056 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34f550fa-7cc4-45a4-a624-5524ec2255ae-catalog-content\") pod \"certified-operators-jb4fj\" (UID: \"34f550fa-7cc4-45a4-a624-5524ec2255ae\") " pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:46 crc kubenswrapper[4756]: I1124 13:01:46.419216 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34f550fa-7cc4-45a4-a624-5524ec2255ae-utilities\") pod \"certified-operators-jb4fj\" (UID: \"34f550fa-7cc4-45a4-a624-5524ec2255ae\") " pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:46 crc kubenswrapper[4756]: I1124 13:01:46.419250 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssmmj\" (UniqueName: \"kubernetes.io/projected/34f550fa-7cc4-45a4-a624-5524ec2255ae-kube-api-access-ssmmj\") pod \"certified-operators-jb4fj\" (UID: \"34f550fa-7cc4-45a4-a624-5524ec2255ae\") " pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:46 crc kubenswrapper[4756]: I1124 13:01:46.419791 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34f550fa-7cc4-45a4-a624-5524ec2255ae-catalog-content\") pod \"certified-operators-jb4fj\" (UID: \"34f550fa-7cc4-45a4-a624-5524ec2255ae\") " pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:46 crc kubenswrapper[4756]: I1124 13:01:46.420034 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34f550fa-7cc4-45a4-a624-5524ec2255ae-utilities\") pod \"certified-operators-jb4fj\" (UID: \"34f550fa-7cc4-45a4-a624-5524ec2255ae\") " pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:46 crc kubenswrapper[4756]: I1124 
13:01:46.447355 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssmmj\" (UniqueName: \"kubernetes.io/projected/34f550fa-7cc4-45a4-a624-5524ec2255ae-kube-api-access-ssmmj\") pod \"certified-operators-jb4fj\" (UID: \"34f550fa-7cc4-45a4-a624-5524ec2255ae\") " pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:46 crc kubenswrapper[4756]: I1124 13:01:46.474767 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:46 crc kubenswrapper[4756]: I1124 13:01:46.982810 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jb4fj"] Nov 24 13:01:47 crc kubenswrapper[4756]: I1124 13:01:47.195965 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jb4fj" event={"ID":"34f550fa-7cc4-45a4-a624-5524ec2255ae","Type":"ContainerStarted","Data":"e92ccfd29f583b15a87eda54d2ec0ac491dde3ac33950eb014176833a4df7598"} Nov 24 13:01:48 crc kubenswrapper[4756]: I1124 13:01:48.210506 4756 generic.go:334] "Generic (PLEG): container finished" podID="34f550fa-7cc4-45a4-a624-5524ec2255ae" containerID="548d76fad73740f1835d1b11c7723174cec8d1e06a2c43bb5da2bbc5823ae707" exitCode=0 Nov 24 13:01:48 crc kubenswrapper[4756]: I1124 13:01:48.210586 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jb4fj" event={"ID":"34f550fa-7cc4-45a4-a624-5524ec2255ae","Type":"ContainerDied","Data":"548d76fad73740f1835d1b11c7723174cec8d1e06a2c43bb5da2bbc5823ae707"} Nov 24 13:01:49 crc kubenswrapper[4756]: I1124 13:01:49.221076 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jb4fj" event={"ID":"34f550fa-7cc4-45a4-a624-5524ec2255ae","Type":"ContainerStarted","Data":"4e7978232dc9780d6ec8e70c783166ff97f342a8bfdca5b15019599d8e1497bc"} Nov 24 13:01:50 crc kubenswrapper[4756]: I1124 
13:01:50.238861 4756 generic.go:334] "Generic (PLEG): container finished" podID="34f550fa-7cc4-45a4-a624-5524ec2255ae" containerID="4e7978232dc9780d6ec8e70c783166ff97f342a8bfdca5b15019599d8e1497bc" exitCode=0 Nov 24 13:01:50 crc kubenswrapper[4756]: I1124 13:01:50.238987 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jb4fj" event={"ID":"34f550fa-7cc4-45a4-a624-5524ec2255ae","Type":"ContainerDied","Data":"4e7978232dc9780d6ec8e70c783166ff97f342a8bfdca5b15019599d8e1497bc"} Nov 24 13:01:51 crc kubenswrapper[4756]: I1124 13:01:51.254562 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jb4fj" event={"ID":"34f550fa-7cc4-45a4-a624-5524ec2255ae","Type":"ContainerStarted","Data":"68dcf8c045138f1be741de414e910ba112e99750de3f83702fe2e603e8a09deb"} Nov 24 13:01:51 crc kubenswrapper[4756]: I1124 13:01:51.282332 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jb4fj" podStartSLOduration=2.623690568 podStartE2EDuration="5.282301694s" podCreationTimestamp="2025-11-24 13:01:46 +0000 UTC" firstStartedPulling="2025-11-24 13:01:48.213120119 +0000 UTC m=+2040.570634261" lastFinishedPulling="2025-11-24 13:01:50.871731245 +0000 UTC m=+2043.229245387" observedRunningTime="2025-11-24 13:01:51.270948484 +0000 UTC m=+2043.628462646" watchObservedRunningTime="2025-11-24 13:01:51.282301694 +0000 UTC m=+2043.639815836" Nov 24 13:01:56 crc kubenswrapper[4756]: I1124 13:01:56.495842 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:56 crc kubenswrapper[4756]: I1124 13:01:56.496677 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:56 crc kubenswrapper[4756]: I1124 13:01:56.542733 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:57 crc kubenswrapper[4756]: I1124 13:01:57.371497 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:01:57 crc kubenswrapper[4756]: I1124 13:01:57.432229 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jb4fj"] Nov 24 13:01:59 crc kubenswrapper[4756]: I1124 13:01:59.337991 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jb4fj" podUID="34f550fa-7cc4-45a4-a624-5524ec2255ae" containerName="registry-server" containerID="cri-o://68dcf8c045138f1be741de414e910ba112e99750de3f83702fe2e603e8a09deb" gracePeriod=2 Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.326463 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.352230 4756 generic.go:334] "Generic (PLEG): container finished" podID="34f550fa-7cc4-45a4-a624-5524ec2255ae" containerID="68dcf8c045138f1be741de414e910ba112e99750de3f83702fe2e603e8a09deb" exitCode=0 Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.352290 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jb4fj" event={"ID":"34f550fa-7cc4-45a4-a624-5524ec2255ae","Type":"ContainerDied","Data":"68dcf8c045138f1be741de414e910ba112e99750de3f83702fe2e603e8a09deb"} Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.352344 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jb4fj" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.352370 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jb4fj" event={"ID":"34f550fa-7cc4-45a4-a624-5524ec2255ae","Type":"ContainerDied","Data":"e92ccfd29f583b15a87eda54d2ec0ac491dde3ac33950eb014176833a4df7598"} Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.352401 4756 scope.go:117] "RemoveContainer" containerID="68dcf8c045138f1be741de414e910ba112e99750de3f83702fe2e603e8a09deb" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.381953 4756 scope.go:117] "RemoveContainer" containerID="4e7978232dc9780d6ec8e70c783166ff97f342a8bfdca5b15019599d8e1497bc" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.410500 4756 scope.go:117] "RemoveContainer" containerID="548d76fad73740f1835d1b11c7723174cec8d1e06a2c43bb5da2bbc5823ae707" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.428567 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34f550fa-7cc4-45a4-a624-5524ec2255ae-catalog-content\") pod \"34f550fa-7cc4-45a4-a624-5524ec2255ae\" (UID: \"34f550fa-7cc4-45a4-a624-5524ec2255ae\") " Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.428645 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssmmj\" (UniqueName: \"kubernetes.io/projected/34f550fa-7cc4-45a4-a624-5524ec2255ae-kube-api-access-ssmmj\") pod \"34f550fa-7cc4-45a4-a624-5524ec2255ae\" (UID: \"34f550fa-7cc4-45a4-a624-5524ec2255ae\") " Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.428698 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34f550fa-7cc4-45a4-a624-5524ec2255ae-utilities\") pod \"34f550fa-7cc4-45a4-a624-5524ec2255ae\" (UID: 
\"34f550fa-7cc4-45a4-a624-5524ec2255ae\") " Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.430258 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34f550fa-7cc4-45a4-a624-5524ec2255ae-utilities" (OuterVolumeSpecName: "utilities") pod "34f550fa-7cc4-45a4-a624-5524ec2255ae" (UID: "34f550fa-7cc4-45a4-a624-5524ec2255ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.437617 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34f550fa-7cc4-45a4-a624-5524ec2255ae-kube-api-access-ssmmj" (OuterVolumeSpecName: "kube-api-access-ssmmj") pod "34f550fa-7cc4-45a4-a624-5524ec2255ae" (UID: "34f550fa-7cc4-45a4-a624-5524ec2255ae"). InnerVolumeSpecName "kube-api-access-ssmmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.457013 4756 scope.go:117] "RemoveContainer" containerID="68dcf8c045138f1be741de414e910ba112e99750de3f83702fe2e603e8a09deb" Nov 24 13:02:00 crc kubenswrapper[4756]: E1124 13:02:00.457554 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68dcf8c045138f1be741de414e910ba112e99750de3f83702fe2e603e8a09deb\": container with ID starting with 68dcf8c045138f1be741de414e910ba112e99750de3f83702fe2e603e8a09deb not found: ID does not exist" containerID="68dcf8c045138f1be741de414e910ba112e99750de3f83702fe2e603e8a09deb" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.457782 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68dcf8c045138f1be741de414e910ba112e99750de3f83702fe2e603e8a09deb"} err="failed to get container status \"68dcf8c045138f1be741de414e910ba112e99750de3f83702fe2e603e8a09deb\": rpc error: code = NotFound desc = could not find container 
\"68dcf8c045138f1be741de414e910ba112e99750de3f83702fe2e603e8a09deb\": container with ID starting with 68dcf8c045138f1be741de414e910ba112e99750de3f83702fe2e603e8a09deb not found: ID does not exist" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.458179 4756 scope.go:117] "RemoveContainer" containerID="4e7978232dc9780d6ec8e70c783166ff97f342a8bfdca5b15019599d8e1497bc" Nov 24 13:02:00 crc kubenswrapper[4756]: E1124 13:02:00.458873 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e7978232dc9780d6ec8e70c783166ff97f342a8bfdca5b15019599d8e1497bc\": container with ID starting with 4e7978232dc9780d6ec8e70c783166ff97f342a8bfdca5b15019599d8e1497bc not found: ID does not exist" containerID="4e7978232dc9780d6ec8e70c783166ff97f342a8bfdca5b15019599d8e1497bc" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.458924 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7978232dc9780d6ec8e70c783166ff97f342a8bfdca5b15019599d8e1497bc"} err="failed to get container status \"4e7978232dc9780d6ec8e70c783166ff97f342a8bfdca5b15019599d8e1497bc\": rpc error: code = NotFound desc = could not find container \"4e7978232dc9780d6ec8e70c783166ff97f342a8bfdca5b15019599d8e1497bc\": container with ID starting with 4e7978232dc9780d6ec8e70c783166ff97f342a8bfdca5b15019599d8e1497bc not found: ID does not exist" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.458955 4756 scope.go:117] "RemoveContainer" containerID="548d76fad73740f1835d1b11c7723174cec8d1e06a2c43bb5da2bbc5823ae707" Nov 24 13:02:00 crc kubenswrapper[4756]: E1124 13:02:00.459315 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"548d76fad73740f1835d1b11c7723174cec8d1e06a2c43bb5da2bbc5823ae707\": container with ID starting with 548d76fad73740f1835d1b11c7723174cec8d1e06a2c43bb5da2bbc5823ae707 not found: ID does not exist" 
containerID="548d76fad73740f1835d1b11c7723174cec8d1e06a2c43bb5da2bbc5823ae707" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.459348 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"548d76fad73740f1835d1b11c7723174cec8d1e06a2c43bb5da2bbc5823ae707"} err="failed to get container status \"548d76fad73740f1835d1b11c7723174cec8d1e06a2c43bb5da2bbc5823ae707\": rpc error: code = NotFound desc = could not find container \"548d76fad73740f1835d1b11c7723174cec8d1e06a2c43bb5da2bbc5823ae707\": container with ID starting with 548d76fad73740f1835d1b11c7723174cec8d1e06a2c43bb5da2bbc5823ae707 not found: ID does not exist" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.475248 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34f550fa-7cc4-45a4-a624-5524ec2255ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34f550fa-7cc4-45a4-a624-5524ec2255ae" (UID: "34f550fa-7cc4-45a4-a624-5524ec2255ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.531061 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34f550fa-7cc4-45a4-a624-5524ec2255ae-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.531097 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssmmj\" (UniqueName: \"kubernetes.io/projected/34f550fa-7cc4-45a4-a624-5524ec2255ae-kube-api-access-ssmmj\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.531108 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34f550fa-7cc4-45a4-a624-5524ec2255ae-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.677619 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jb4fj"] Nov 24 13:02:00 crc kubenswrapper[4756]: I1124 13:02:00.684777 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jb4fj"] Nov 24 13:02:02 crc kubenswrapper[4756]: I1124 13:02:02.491925 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34f550fa-7cc4-45a4-a624-5524ec2255ae" path="/var/lib/kubelet/pods/34f550fa-7cc4-45a4-a624-5524ec2255ae/volumes" Nov 24 13:02:03 crc kubenswrapper[4756]: I1124 13:02:03.478763 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:02:03 crc kubenswrapper[4756]: I1124 13:02:03.479287 4756 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:02:03 crc kubenswrapper[4756]: E1124 13:02:03.641452 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cf15818_9c96_4bbe_bb89_6d26aff5bfbe.slice/crio-1822cd7299b5fcfd81e3206569e7010b10d3dd15ca8bf73bca061bde800ea74f.scope\": RecentStats: unable to find data in memory cache]" Nov 24 13:02:04 crc kubenswrapper[4756]: I1124 13:02:04.410968 4756 generic.go:334] "Generic (PLEG): container finished" podID="7cf15818-9c96-4bbe-bb89-6d26aff5bfbe" containerID="1822cd7299b5fcfd81e3206569e7010b10d3dd15ca8bf73bca061bde800ea74f" exitCode=0 Nov 24 13:02:04 crc kubenswrapper[4756]: I1124 13:02:04.411032 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" event={"ID":"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe","Type":"ContainerDied","Data":"1822cd7299b5fcfd81e3206569e7010b10d3dd15ca8bf73bca061bde800ea74f"} Nov 24 13:02:05 crc kubenswrapper[4756]: I1124 13:02:05.929329 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.047479 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.047585 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-inventory\") pod \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.047684 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th5jd\" (UniqueName: \"kubernetes.io/projected/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-kube-api-access-th5jd\") pod \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.047728 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-ssh-key\") pod \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.047799 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-neutron-metadata-combined-ca-bundle\") pod \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 
13:02:06.047895 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-nova-metadata-neutron-config-0\") pod \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\" (UID: \"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe\") " Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.055840 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-kube-api-access-th5jd" (OuterVolumeSpecName: "kube-api-access-th5jd") pod "7cf15818-9c96-4bbe-bb89-6d26aff5bfbe" (UID: "7cf15818-9c96-4bbe-bb89-6d26aff5bfbe"). InnerVolumeSpecName "kube-api-access-th5jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.060391 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7cf15818-9c96-4bbe-bb89-6d26aff5bfbe" (UID: "7cf15818-9c96-4bbe-bb89-6d26aff5bfbe"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.079368 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7cf15818-9c96-4bbe-bb89-6d26aff5bfbe" (UID: "7cf15818-9c96-4bbe-bb89-6d26aff5bfbe"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.085699 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-inventory" (OuterVolumeSpecName: "inventory") pod "7cf15818-9c96-4bbe-bb89-6d26aff5bfbe" (UID: "7cf15818-9c96-4bbe-bb89-6d26aff5bfbe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.093677 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7cf15818-9c96-4bbe-bb89-6d26aff5bfbe" (UID: "7cf15818-9c96-4bbe-bb89-6d26aff5bfbe"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.095588 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7cf15818-9c96-4bbe-bb89-6d26aff5bfbe" (UID: "7cf15818-9c96-4bbe-bb89-6d26aff5bfbe"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.151028 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th5jd\" (UniqueName: \"kubernetes.io/projected/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-kube-api-access-th5jd\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.151274 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.151510 4756 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.151809 4756 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.152020 4756 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.152226 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cf15818-9c96-4bbe-bb89-6d26aff5bfbe-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.435225 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" 
event={"ID":"7cf15818-9c96-4bbe-bb89-6d26aff5bfbe","Type":"ContainerDied","Data":"f1653d14650dac28e90f48e7ffbad2b15699d350cdee6a5f1992fbb4ef8c52f3"} Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.435281 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1653d14650dac28e90f48e7ffbad2b15699d350cdee6a5f1992fbb4ef8c52f3" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.435991 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.655442 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh"] Nov 24 13:02:06 crc kubenswrapper[4756]: E1124 13:02:06.656048 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f550fa-7cc4-45a4-a624-5524ec2255ae" containerName="registry-server" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.656082 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f550fa-7cc4-45a4-a624-5524ec2255ae" containerName="registry-server" Nov 24 13:02:06 crc kubenswrapper[4756]: E1124 13:02:06.656100 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf15818-9c96-4bbe-bb89-6d26aff5bfbe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.656114 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf15818-9c96-4bbe-bb89-6d26aff5bfbe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 13:02:06 crc kubenswrapper[4756]: E1124 13:02:06.656136 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f550fa-7cc4-45a4-a624-5524ec2255ae" containerName="extract-utilities" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.656146 4756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="34f550fa-7cc4-45a4-a624-5524ec2255ae" containerName="extract-utilities" Nov 24 13:02:06 crc kubenswrapper[4756]: E1124 13:02:06.656295 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f550fa-7cc4-45a4-a624-5524ec2255ae" containerName="extract-content" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.656310 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f550fa-7cc4-45a4-a624-5524ec2255ae" containerName="extract-content" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.656672 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f550fa-7cc4-45a4-a624-5524ec2255ae" containerName="registry-server" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.656707 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf15818-9c96-4bbe-bb89-6d26aff5bfbe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.657700 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.660505 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.662784 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.663116 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.666440 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.669427 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh"] Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.669724 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.763672 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t96jc\" (UniqueName: \"kubernetes.io/projected/9956cbef-8286-4c85-9c91-4e476d82d3d9-kube-api-access-t96jc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.764005 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.764111 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.764208 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.764335 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.866411 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t96jc\" (UniqueName: \"kubernetes.io/projected/9956cbef-8286-4c85-9c91-4e476d82d3d9-kube-api-access-t96jc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.866467 4756 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.866488 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.867351 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.867417 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.871152 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 
13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.871874 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.872415 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.885357 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.889213 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t96jc\" (UniqueName: \"kubernetes.io/projected/9956cbef-8286-4c85-9c91-4e476d82d3d9-kube-api-access-t96jc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cnghh\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:06 crc kubenswrapper[4756]: I1124 13:02:06.978637 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:02:07 crc kubenswrapper[4756]: I1124 13:02:07.571864 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh"] Nov 24 13:02:08 crc kubenswrapper[4756]: I1124 13:02:08.454762 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" event={"ID":"9956cbef-8286-4c85-9c91-4e476d82d3d9","Type":"ContainerStarted","Data":"5a01d7fb6b5dbf38fac0e0296ae16ab55c0b533ebc1b19f2540983c8e6b8de72"} Nov 24 13:02:08 crc kubenswrapper[4756]: I1124 13:02:08.455110 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" event={"ID":"9956cbef-8286-4c85-9c91-4e476d82d3d9","Type":"ContainerStarted","Data":"a0e03a9a375dd034bca280f8c0dc39367389bc021522ffb218f3292b486b4bbd"} Nov 24 13:02:08 crc kubenswrapper[4756]: I1124 13:02:08.480912 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" podStartSLOduration=1.93149615 podStartE2EDuration="2.480883392s" podCreationTimestamp="2025-11-24 13:02:06 +0000 UTC" firstStartedPulling="2025-11-24 13:02:07.584370496 +0000 UTC m=+2059.941884638" lastFinishedPulling="2025-11-24 13:02:08.133757738 +0000 UTC m=+2060.491271880" observedRunningTime="2025-11-24 13:02:08.471406553 +0000 UTC m=+2060.828920715" watchObservedRunningTime="2025-11-24 13:02:08.480883392 +0000 UTC m=+2060.838397544" Nov 24 13:02:33 crc kubenswrapper[4756]: I1124 13:02:33.479095 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:02:33 crc kubenswrapper[4756]: I1124 
13:02:33.480396 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:02:33 crc kubenswrapper[4756]: I1124 13:02:33.480444 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 13:02:33 crc kubenswrapper[4756]: I1124 13:02:33.480826 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c7d2fa4a438e2a051f122ae687f3c270b68a89cf1dc7e0bd17effe9f131a218"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:02:33 crc kubenswrapper[4756]: I1124 13:02:33.480872 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://0c7d2fa4a438e2a051f122ae687f3c270b68a89cf1dc7e0bd17effe9f131a218" gracePeriod=600 Nov 24 13:02:33 crc kubenswrapper[4756]: I1124 13:02:33.741040 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="0c7d2fa4a438e2a051f122ae687f3c270b68a89cf1dc7e0bd17effe9f131a218" exitCode=0 Nov 24 13:02:33 crc kubenswrapper[4756]: I1124 13:02:33.741597 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"0c7d2fa4a438e2a051f122ae687f3c270b68a89cf1dc7e0bd17effe9f131a218"} Nov 24 13:02:33 crc 
kubenswrapper[4756]: I1124 13:02:33.741657 4756 scope.go:117] "RemoveContainer" containerID="bbc202a03ccb257532046c5baaec8aa3d01298e2789e46b5bdfda973609708eb" Nov 24 13:02:34 crc kubenswrapper[4756]: I1124 13:02:34.753229 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3"} Nov 24 13:04:28 crc kubenswrapper[4756]: I1124 13:04:28.559448 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mhvgk"] Nov 24 13:04:28 crc kubenswrapper[4756]: I1124 13:04:28.565429 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:28 crc kubenswrapper[4756]: I1124 13:04:28.572746 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mhvgk"] Nov 24 13:04:28 crc kubenswrapper[4756]: I1124 13:04:28.587013 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lchvg\" (UniqueName: \"kubernetes.io/projected/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-kube-api-access-lchvg\") pod \"redhat-operators-mhvgk\" (UID: \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\") " pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:28 crc kubenswrapper[4756]: I1124 13:04:28.587096 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-utilities\") pod \"redhat-operators-mhvgk\" (UID: \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\") " pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:28 crc kubenswrapper[4756]: I1124 13:04:28.587347 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-catalog-content\") pod \"redhat-operators-mhvgk\" (UID: \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\") " pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:28 crc kubenswrapper[4756]: I1124 13:04:28.689180 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-catalog-content\") pod \"redhat-operators-mhvgk\" (UID: \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\") " pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:28 crc kubenswrapper[4756]: I1124 13:04:28.689387 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lchvg\" (UniqueName: \"kubernetes.io/projected/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-kube-api-access-lchvg\") pod \"redhat-operators-mhvgk\" (UID: \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\") " pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:28 crc kubenswrapper[4756]: I1124 13:04:28.689414 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-utilities\") pod \"redhat-operators-mhvgk\" (UID: \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\") " pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:28 crc kubenswrapper[4756]: I1124 13:04:28.690041 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-utilities\") pod \"redhat-operators-mhvgk\" (UID: \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\") " pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:28 crc kubenswrapper[4756]: I1124 13:04:28.690366 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-catalog-content\") pod \"redhat-operators-mhvgk\" (UID: \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\") " pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:28 crc kubenswrapper[4756]: I1124 13:04:28.710434 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lchvg\" (UniqueName: \"kubernetes.io/projected/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-kube-api-access-lchvg\") pod \"redhat-operators-mhvgk\" (UID: \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\") " pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:28 crc kubenswrapper[4756]: I1124 13:04:28.906906 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:29 crc kubenswrapper[4756]: I1124 13:04:29.452643 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mhvgk"] Nov 24 13:04:29 crc kubenswrapper[4756]: I1124 13:04:29.567024 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhvgk" event={"ID":"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0","Type":"ContainerStarted","Data":"14e8d0f9aff8586d603998edde1dbc995f3b529ca507debec03b4777c3b6c5c1"} Nov 24 13:04:30 crc kubenswrapper[4756]: I1124 13:04:30.580486 4756 generic.go:334] "Generic (PLEG): container finished" podID="b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" containerID="e9011b2836898bef51e5f1bcd026fc13d8a6ea42e1854988458b0765ce29e886" exitCode=0 Nov 24 13:04:30 crc kubenswrapper[4756]: I1124 13:04:30.580597 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhvgk" event={"ID":"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0","Type":"ContainerDied","Data":"e9011b2836898bef51e5f1bcd026fc13d8a6ea42e1854988458b0765ce29e886"} Nov 24 13:04:30 crc kubenswrapper[4756]: I1124 13:04:30.583959 4756 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Nov 24 13:04:33 crc kubenswrapper[4756]: I1124 13:04:33.479931 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:04:33 crc kubenswrapper[4756]: I1124 13:04:33.480973 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:04:38 crc kubenswrapper[4756]: I1124 13:04:38.658112 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhvgk" event={"ID":"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0","Type":"ContainerStarted","Data":"7292f37b83b11a62e82a4f896a15b4b19b372a1dfc02035fdc5269f3421a5ce4"} Nov 24 13:04:40 crc kubenswrapper[4756]: I1124 13:04:40.684964 4756 generic.go:334] "Generic (PLEG): container finished" podID="b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" containerID="7292f37b83b11a62e82a4f896a15b4b19b372a1dfc02035fdc5269f3421a5ce4" exitCode=0 Nov 24 13:04:40 crc kubenswrapper[4756]: I1124 13:04:40.685038 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhvgk" event={"ID":"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0","Type":"ContainerDied","Data":"7292f37b83b11a62e82a4f896a15b4b19b372a1dfc02035fdc5269f3421a5ce4"} Nov 24 13:04:41 crc kubenswrapper[4756]: I1124 13:04:41.703133 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhvgk" 
event={"ID":"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0","Type":"ContainerStarted","Data":"107bda56961a4db004b495580d2c0e1255f4487a7df444edb78fea5f443703a6"} Nov 24 13:04:48 crc kubenswrapper[4756]: I1124 13:04:48.907598 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:48 crc kubenswrapper[4756]: I1124 13:04:48.908422 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:50 crc kubenswrapper[4756]: I1124 13:04:50.008095 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mhvgk" podUID="b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" containerName="registry-server" probeResult="failure" output=< Nov 24 13:04:50 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 13:04:50 crc kubenswrapper[4756]: > Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.372366 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mhvgk" podStartSLOduration=15.525585457 podStartE2EDuration="26.372344113s" podCreationTimestamp="2025-11-24 13:04:28 +0000 UTC" firstStartedPulling="2025-11-24 13:04:30.583714243 +0000 UTC m=+2202.941228385" lastFinishedPulling="2025-11-24 13:04:41.430472889 +0000 UTC m=+2213.787987041" observedRunningTime="2025-11-24 13:04:41.724567489 +0000 UTC m=+2214.082081641" watchObservedRunningTime="2025-11-24 13:04:54.372344113 +0000 UTC m=+2226.729858255" Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.379781 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fz628"] Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.382136 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fz628" Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.426284 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fz628"] Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.501445 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d6850b-0a93-4020-93da-21aa7aaa8ff0-catalog-content\") pod \"community-operators-fz628\" (UID: \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\") " pod="openshift-marketplace/community-operators-fz628" Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.501541 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d6850b-0a93-4020-93da-21aa7aaa8ff0-utilities\") pod \"community-operators-fz628\" (UID: \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\") " pod="openshift-marketplace/community-operators-fz628" Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.501844 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn756\" (UniqueName: \"kubernetes.io/projected/00d6850b-0a93-4020-93da-21aa7aaa8ff0-kube-api-access-jn756\") pod \"community-operators-fz628\" (UID: \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\") " pod="openshift-marketplace/community-operators-fz628" Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.604104 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn756\" (UniqueName: \"kubernetes.io/projected/00d6850b-0a93-4020-93da-21aa7aaa8ff0-kube-api-access-jn756\") pod \"community-operators-fz628\" (UID: \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\") " pod="openshift-marketplace/community-operators-fz628" Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.604252 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d6850b-0a93-4020-93da-21aa7aaa8ff0-catalog-content\") pod \"community-operators-fz628\" (UID: \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\") " pod="openshift-marketplace/community-operators-fz628" Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.604306 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d6850b-0a93-4020-93da-21aa7aaa8ff0-utilities\") pod \"community-operators-fz628\" (UID: \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\") " pod="openshift-marketplace/community-operators-fz628" Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.605278 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d6850b-0a93-4020-93da-21aa7aaa8ff0-catalog-content\") pod \"community-operators-fz628\" (UID: \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\") " pod="openshift-marketplace/community-operators-fz628" Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.605303 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d6850b-0a93-4020-93da-21aa7aaa8ff0-utilities\") pod \"community-operators-fz628\" (UID: \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\") " pod="openshift-marketplace/community-operators-fz628" Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.628018 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn756\" (UniqueName: \"kubernetes.io/projected/00d6850b-0a93-4020-93da-21aa7aaa8ff0-kube-api-access-jn756\") pod \"community-operators-fz628\" (UID: \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\") " pod="openshift-marketplace/community-operators-fz628" Nov 24 13:04:54 crc kubenswrapper[4756]: I1124 13:04:54.705469 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fz628" Nov 24 13:04:55 crc kubenswrapper[4756]: I1124 13:04:55.341388 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fz628"] Nov 24 13:04:55 crc kubenswrapper[4756]: I1124 13:04:55.927408 4756 generic.go:334] "Generic (PLEG): container finished" podID="00d6850b-0a93-4020-93da-21aa7aaa8ff0" containerID="2ecafe9fdd891cace9b437dd38163dccfcfaf6eb6a2440d79b490471ba9b0a11" exitCode=0 Nov 24 13:04:55 crc kubenswrapper[4756]: I1124 13:04:55.927513 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz628" event={"ID":"00d6850b-0a93-4020-93da-21aa7aaa8ff0","Type":"ContainerDied","Data":"2ecafe9fdd891cace9b437dd38163dccfcfaf6eb6a2440d79b490471ba9b0a11"} Nov 24 13:04:55 crc kubenswrapper[4756]: I1124 13:04:55.929547 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz628" event={"ID":"00d6850b-0a93-4020-93da-21aa7aaa8ff0","Type":"ContainerStarted","Data":"4b227b3cccbdf475a9757c83efcb721af8fa129b4268b30209e303a744ad3471"} Nov 24 13:04:56 crc kubenswrapper[4756]: I1124 13:04:56.939817 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz628" event={"ID":"00d6850b-0a93-4020-93da-21aa7aaa8ff0","Type":"ContainerStarted","Data":"de8cb928906813c564acd71f9a00a34e887713b46de4f7594d9e855b36149ba6"} Nov 24 13:04:57 crc kubenswrapper[4756]: I1124 13:04:57.952823 4756 generic.go:334] "Generic (PLEG): container finished" podID="00d6850b-0a93-4020-93da-21aa7aaa8ff0" containerID="de8cb928906813c564acd71f9a00a34e887713b46de4f7594d9e855b36149ba6" exitCode=0 Nov 24 13:04:57 crc kubenswrapper[4756]: I1124 13:04:57.952903 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz628" 
event={"ID":"00d6850b-0a93-4020-93da-21aa7aaa8ff0","Type":"ContainerDied","Data":"de8cb928906813c564acd71f9a00a34e887713b46de4f7594d9e855b36149ba6"} Nov 24 13:04:58 crc kubenswrapper[4756]: I1124 13:04:58.970993 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz628" event={"ID":"00d6850b-0a93-4020-93da-21aa7aaa8ff0","Type":"ContainerStarted","Data":"7f84cba91d20694c3bfce0fd7eab16cb9d28eb7bb1869c2f93ff0d8e8b188476"} Nov 24 13:04:58 crc kubenswrapper[4756]: I1124 13:04:58.991751 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:04:59 crc kubenswrapper[4756]: I1124 13:04:59.025818 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fz628" podStartSLOduration=2.585390597 podStartE2EDuration="5.025791076s" podCreationTimestamp="2025-11-24 13:04:54 +0000 UTC" firstStartedPulling="2025-11-24 13:04:55.931854889 +0000 UTC m=+2228.289369071" lastFinishedPulling="2025-11-24 13:04:58.372255398 +0000 UTC m=+2230.729769550" observedRunningTime="2025-11-24 13:04:59.001751099 +0000 UTC m=+2231.359265281" watchObservedRunningTime="2025-11-24 13:04:59.025791076 +0000 UTC m=+2231.383305228" Nov 24 13:04:59 crc kubenswrapper[4756]: I1124 13:04:59.049094 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:05:00 crc kubenswrapper[4756]: I1124 13:05:00.380592 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mhvgk"] Nov 24 13:05:00 crc kubenswrapper[4756]: I1124 13:05:00.754477 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tbh8f"] Nov 24 13:05:00 crc kubenswrapper[4756]: I1124 13:05:00.755013 4756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-tbh8f" podUID="67e23dac-91d1-47d0-9ae7-96ee82cd8749" containerName="registry-server" containerID="cri-o://16f8604364752474f86d8e0d224a02c4e3b607dd4fe6f95011b37d145484a306" gracePeriod=2 Nov 24 13:05:00 crc kubenswrapper[4756]: I1124 13:05:00.992481 4756 generic.go:334] "Generic (PLEG): container finished" podID="67e23dac-91d1-47d0-9ae7-96ee82cd8749" containerID="16f8604364752474f86d8e0d224a02c4e3b607dd4fe6f95011b37d145484a306" exitCode=0 Nov 24 13:05:00 crc kubenswrapper[4756]: I1124 13:05:00.992589 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbh8f" event={"ID":"67e23dac-91d1-47d0-9ae7-96ee82cd8749","Type":"ContainerDied","Data":"16f8604364752474f86d8e0d224a02c4e3b607dd4fe6f95011b37d145484a306"} Nov 24 13:05:01 crc kubenswrapper[4756]: I1124 13:05:01.223544 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 13:05:01 crc kubenswrapper[4756]: I1124 13:05:01.352317 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e23dac-91d1-47d0-9ae7-96ee82cd8749-utilities\") pod \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\" (UID: \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\") " Nov 24 13:05:01 crc kubenswrapper[4756]: I1124 13:05:01.352471 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zssqs\" (UniqueName: \"kubernetes.io/projected/67e23dac-91d1-47d0-9ae7-96ee82cd8749-kube-api-access-zssqs\") pod \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\" (UID: \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\") " Nov 24 13:05:01 crc kubenswrapper[4756]: I1124 13:05:01.352685 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e23dac-91d1-47d0-9ae7-96ee82cd8749-catalog-content\") pod 
\"67e23dac-91d1-47d0-9ae7-96ee82cd8749\" (UID: \"67e23dac-91d1-47d0-9ae7-96ee82cd8749\") " Nov 24 13:05:01 crc kubenswrapper[4756]: I1124 13:05:01.355663 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e23dac-91d1-47d0-9ae7-96ee82cd8749-utilities" (OuterVolumeSpecName: "utilities") pod "67e23dac-91d1-47d0-9ae7-96ee82cd8749" (UID: "67e23dac-91d1-47d0-9ae7-96ee82cd8749"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:05:01 crc kubenswrapper[4756]: I1124 13:05:01.360112 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e23dac-91d1-47d0-9ae7-96ee82cd8749-kube-api-access-zssqs" (OuterVolumeSpecName: "kube-api-access-zssqs") pod "67e23dac-91d1-47d0-9ae7-96ee82cd8749" (UID: "67e23dac-91d1-47d0-9ae7-96ee82cd8749"). InnerVolumeSpecName "kube-api-access-zssqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:05:01 crc kubenswrapper[4756]: I1124 13:05:01.437291 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e23dac-91d1-47d0-9ae7-96ee82cd8749-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67e23dac-91d1-47d0-9ae7-96ee82cd8749" (UID: "67e23dac-91d1-47d0-9ae7-96ee82cd8749"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:05:01 crc kubenswrapper[4756]: I1124 13:05:01.455184 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e23dac-91d1-47d0-9ae7-96ee82cd8749-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:05:01 crc kubenswrapper[4756]: I1124 13:05:01.455218 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zssqs\" (UniqueName: \"kubernetes.io/projected/67e23dac-91d1-47d0-9ae7-96ee82cd8749-kube-api-access-zssqs\") on node \"crc\" DevicePath \"\"" Nov 24 13:05:01 crc kubenswrapper[4756]: I1124 13:05:01.455234 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e23dac-91d1-47d0-9ae7-96ee82cd8749-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:05:02 crc kubenswrapper[4756]: I1124 13:05:02.014067 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbh8f" event={"ID":"67e23dac-91d1-47d0-9ae7-96ee82cd8749","Type":"ContainerDied","Data":"3c2b3e925d1328bf6b3d41e600aca3fe6b31649e1dca2d2cd13fa2957691bad8"} Nov 24 13:05:02 crc kubenswrapper[4756]: I1124 13:05:02.014496 4756 scope.go:117] "RemoveContainer" containerID="16f8604364752474f86d8e0d224a02c4e3b607dd4fe6f95011b37d145484a306" Nov 24 13:05:02 crc kubenswrapper[4756]: I1124 13:05:02.014143 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tbh8f" Nov 24 13:05:02 crc kubenswrapper[4756]: I1124 13:05:02.053998 4756 scope.go:117] "RemoveContainer" containerID="f616cb7c1d6f4cab43ebf0fdb6aefe95042c38b27fb3b3d20f9df14800242ab5" Nov 24 13:05:02 crc kubenswrapper[4756]: I1124 13:05:02.055373 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tbh8f"] Nov 24 13:05:02 crc kubenswrapper[4756]: I1124 13:05:02.071451 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tbh8f"] Nov 24 13:05:02 crc kubenswrapper[4756]: I1124 13:05:02.096646 4756 scope.go:117] "RemoveContainer" containerID="a9d4f9c6e9a9463e7710c8e73df8d3b7eae7322a84c6d3b71cc908011f566f5e" Nov 24 13:05:02 crc kubenswrapper[4756]: I1124 13:05:02.488232 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e23dac-91d1-47d0-9ae7-96ee82cd8749" path="/var/lib/kubelet/pods/67e23dac-91d1-47d0-9ae7-96ee82cd8749/volumes" Nov 24 13:05:03 crc kubenswrapper[4756]: I1124 13:05:03.479408 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:05:03 crc kubenswrapper[4756]: I1124 13:05:03.479463 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:05:04 crc kubenswrapper[4756]: I1124 13:05:04.706638 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fz628" Nov 24 13:05:04 crc 
kubenswrapper[4756]: I1124 13:05:04.707289 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fz628" Nov 24 13:05:04 crc kubenswrapper[4756]: I1124 13:05:04.772776 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fz628" Nov 24 13:05:05 crc kubenswrapper[4756]: I1124 13:05:05.090218 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fz628" Nov 24 13:05:07 crc kubenswrapper[4756]: I1124 13:05:07.169096 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fz628"] Nov 24 13:05:07 crc kubenswrapper[4756]: I1124 13:05:07.170430 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fz628" podUID="00d6850b-0a93-4020-93da-21aa7aaa8ff0" containerName="registry-server" containerID="cri-o://7f84cba91d20694c3bfce0fd7eab16cb9d28eb7bb1869c2f93ff0d8e8b188476" gracePeriod=2 Nov 24 13:05:07 crc kubenswrapper[4756]: I1124 13:05:07.673448 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fz628" Nov 24 13:05:07 crc kubenswrapper[4756]: I1124 13:05:07.799780 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn756\" (UniqueName: \"kubernetes.io/projected/00d6850b-0a93-4020-93da-21aa7aaa8ff0-kube-api-access-jn756\") pod \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\" (UID: \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\") " Nov 24 13:05:07 crc kubenswrapper[4756]: I1124 13:05:07.799841 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d6850b-0a93-4020-93da-21aa7aaa8ff0-utilities\") pod \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\" (UID: \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\") " Nov 24 13:05:07 crc kubenswrapper[4756]: I1124 13:05:07.799956 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d6850b-0a93-4020-93da-21aa7aaa8ff0-catalog-content\") pod \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\" (UID: \"00d6850b-0a93-4020-93da-21aa7aaa8ff0\") " Nov 24 13:05:07 crc kubenswrapper[4756]: I1124 13:05:07.802059 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d6850b-0a93-4020-93da-21aa7aaa8ff0-utilities" (OuterVolumeSpecName: "utilities") pod "00d6850b-0a93-4020-93da-21aa7aaa8ff0" (UID: "00d6850b-0a93-4020-93da-21aa7aaa8ff0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:05:07 crc kubenswrapper[4756]: I1124 13:05:07.809135 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d6850b-0a93-4020-93da-21aa7aaa8ff0-kube-api-access-jn756" (OuterVolumeSpecName: "kube-api-access-jn756") pod "00d6850b-0a93-4020-93da-21aa7aaa8ff0" (UID: "00d6850b-0a93-4020-93da-21aa7aaa8ff0"). InnerVolumeSpecName "kube-api-access-jn756". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:05:07 crc kubenswrapper[4756]: I1124 13:05:07.902772 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn756\" (UniqueName: \"kubernetes.io/projected/00d6850b-0a93-4020-93da-21aa7aaa8ff0-kube-api-access-jn756\") on node \"crc\" DevicePath \"\"" Nov 24 13:05:07 crc kubenswrapper[4756]: I1124 13:05:07.902820 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d6850b-0a93-4020-93da-21aa7aaa8ff0-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.074786 4756 generic.go:334] "Generic (PLEG): container finished" podID="00d6850b-0a93-4020-93da-21aa7aaa8ff0" containerID="7f84cba91d20694c3bfce0fd7eab16cb9d28eb7bb1869c2f93ff0d8e8b188476" exitCode=0 Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.074829 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fz628" Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.074856 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz628" event={"ID":"00d6850b-0a93-4020-93da-21aa7aaa8ff0","Type":"ContainerDied","Data":"7f84cba91d20694c3bfce0fd7eab16cb9d28eb7bb1869c2f93ff0d8e8b188476"} Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.074914 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fz628" event={"ID":"00d6850b-0a93-4020-93da-21aa7aaa8ff0","Type":"ContainerDied","Data":"4b227b3cccbdf475a9757c83efcb721af8fa129b4268b30209e303a744ad3471"} Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.074948 4756 scope.go:117] "RemoveContainer" containerID="7f84cba91d20694c3bfce0fd7eab16cb9d28eb7bb1869c2f93ff0d8e8b188476" Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.110645 4756 scope.go:117] "RemoveContainer" 
containerID="de8cb928906813c564acd71f9a00a34e887713b46de4f7594d9e855b36149ba6" Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.151634 4756 scope.go:117] "RemoveContainer" containerID="2ecafe9fdd891cace9b437dd38163dccfcfaf6eb6a2440d79b490471ba9b0a11" Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.207218 4756 scope.go:117] "RemoveContainer" containerID="7f84cba91d20694c3bfce0fd7eab16cb9d28eb7bb1869c2f93ff0d8e8b188476" Nov 24 13:05:08 crc kubenswrapper[4756]: E1124 13:05:08.207641 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f84cba91d20694c3bfce0fd7eab16cb9d28eb7bb1869c2f93ff0d8e8b188476\": container with ID starting with 7f84cba91d20694c3bfce0fd7eab16cb9d28eb7bb1869c2f93ff0d8e8b188476 not found: ID does not exist" containerID="7f84cba91d20694c3bfce0fd7eab16cb9d28eb7bb1869c2f93ff0d8e8b188476" Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.207669 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f84cba91d20694c3bfce0fd7eab16cb9d28eb7bb1869c2f93ff0d8e8b188476"} err="failed to get container status \"7f84cba91d20694c3bfce0fd7eab16cb9d28eb7bb1869c2f93ff0d8e8b188476\": rpc error: code = NotFound desc = could not find container \"7f84cba91d20694c3bfce0fd7eab16cb9d28eb7bb1869c2f93ff0d8e8b188476\": container with ID starting with 7f84cba91d20694c3bfce0fd7eab16cb9d28eb7bb1869c2f93ff0d8e8b188476 not found: ID does not exist" Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.207689 4756 scope.go:117] "RemoveContainer" containerID="de8cb928906813c564acd71f9a00a34e887713b46de4f7594d9e855b36149ba6" Nov 24 13:05:08 crc kubenswrapper[4756]: E1124 13:05:08.207904 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de8cb928906813c564acd71f9a00a34e887713b46de4f7594d9e855b36149ba6\": container with ID starting with 
de8cb928906813c564acd71f9a00a34e887713b46de4f7594d9e855b36149ba6 not found: ID does not exist" containerID="de8cb928906813c564acd71f9a00a34e887713b46de4f7594d9e855b36149ba6" Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.207935 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de8cb928906813c564acd71f9a00a34e887713b46de4f7594d9e855b36149ba6"} err="failed to get container status \"de8cb928906813c564acd71f9a00a34e887713b46de4f7594d9e855b36149ba6\": rpc error: code = NotFound desc = could not find container \"de8cb928906813c564acd71f9a00a34e887713b46de4f7594d9e855b36149ba6\": container with ID starting with de8cb928906813c564acd71f9a00a34e887713b46de4f7594d9e855b36149ba6 not found: ID does not exist" Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.207956 4756 scope.go:117] "RemoveContainer" containerID="2ecafe9fdd891cace9b437dd38163dccfcfaf6eb6a2440d79b490471ba9b0a11" Nov 24 13:05:08 crc kubenswrapper[4756]: E1124 13:05:08.208282 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ecafe9fdd891cace9b437dd38163dccfcfaf6eb6a2440d79b490471ba9b0a11\": container with ID starting with 2ecafe9fdd891cace9b437dd38163dccfcfaf6eb6a2440d79b490471ba9b0a11 not found: ID does not exist" containerID="2ecafe9fdd891cace9b437dd38163dccfcfaf6eb6a2440d79b490471ba9b0a11" Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.208344 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ecafe9fdd891cace9b437dd38163dccfcfaf6eb6a2440d79b490471ba9b0a11"} err="failed to get container status \"2ecafe9fdd891cace9b437dd38163dccfcfaf6eb6a2440d79b490471ba9b0a11\": rpc error: code = NotFound desc = could not find container \"2ecafe9fdd891cace9b437dd38163dccfcfaf6eb6a2440d79b490471ba9b0a11\": container with ID starting with 2ecafe9fdd891cace9b437dd38163dccfcfaf6eb6a2440d79b490471ba9b0a11 not found: ID does not 
exist" Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.532690 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d6850b-0a93-4020-93da-21aa7aaa8ff0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00d6850b-0a93-4020-93da-21aa7aaa8ff0" (UID: "00d6850b-0a93-4020-93da-21aa7aaa8ff0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.616592 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d6850b-0a93-4020-93da-21aa7aaa8ff0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.735659 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fz628"] Nov 24 13:05:08 crc kubenswrapper[4756]: I1124 13:05:08.751877 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fz628"] Nov 24 13:05:10 crc kubenswrapper[4756]: I1124 13:05:10.491081 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d6850b-0a93-4020-93da-21aa7aaa8ff0" path="/var/lib/kubelet/pods/00d6850b-0a93-4020-93da-21aa7aaa8ff0/volumes" Nov 24 13:05:33 crc kubenswrapper[4756]: I1124 13:05:33.478889 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:05:33 crc kubenswrapper[4756]: I1124 13:05:33.479681 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:05:33 crc kubenswrapper[4756]: I1124 13:05:33.479736 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 13:05:33 crc kubenswrapper[4756]: I1124 13:05:33.480627 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:05:33 crc kubenswrapper[4756]: I1124 13:05:33.480683 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" gracePeriod=600 Nov 24 13:05:33 crc kubenswrapper[4756]: E1124 13:05:33.614555 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:05:34 crc kubenswrapper[4756]: I1124 13:05:34.372922 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" exitCode=0 Nov 24 13:05:34 crc kubenswrapper[4756]: I1124 13:05:34.373049 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3"} Nov 24 13:05:34 crc kubenswrapper[4756]: I1124 13:05:34.373470 4756 scope.go:117] "RemoveContainer" containerID="0c7d2fa4a438e2a051f122ae687f3c270b68a89cf1dc7e0bd17effe9f131a218" Nov 24 13:05:34 crc kubenswrapper[4756]: I1124 13:05:34.374715 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:05:34 crc kubenswrapper[4756]: E1124 13:05:34.375395 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:05:48 crc kubenswrapper[4756]: I1124 13:05:48.483764 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:05:48 crc kubenswrapper[4756]: E1124 13:05:48.484889 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:06:00 crc kubenswrapper[4756]: I1124 13:06:00.476437 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:06:00 crc kubenswrapper[4756]: E1124 13:06:00.477528 4756 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:06:11 crc kubenswrapper[4756]: I1124 13:06:11.477041 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:06:11 crc kubenswrapper[4756]: E1124 13:06:11.479888 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:06:25 crc kubenswrapper[4756]: I1124 13:06:25.476271 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:06:25 crc kubenswrapper[4756]: E1124 13:06:25.477510 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:06:39 crc kubenswrapper[4756]: I1124 13:06:39.476028 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:06:39 crc kubenswrapper[4756]: E1124 
13:06:39.476969 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.632003 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jxr7d"] Nov 24 13:06:44 crc kubenswrapper[4756]: E1124 13:06:44.633446 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e23dac-91d1-47d0-9ae7-96ee82cd8749" containerName="registry-server" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.633470 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e23dac-91d1-47d0-9ae7-96ee82cd8749" containerName="registry-server" Nov 24 13:06:44 crc kubenswrapper[4756]: E1124 13:06:44.633505 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d6850b-0a93-4020-93da-21aa7aaa8ff0" containerName="extract-utilities" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.633517 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d6850b-0a93-4020-93da-21aa7aaa8ff0" containerName="extract-utilities" Nov 24 13:06:44 crc kubenswrapper[4756]: E1124 13:06:44.633561 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d6850b-0a93-4020-93da-21aa7aaa8ff0" containerName="extract-content" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.633573 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d6850b-0a93-4020-93da-21aa7aaa8ff0" containerName="extract-content" Nov 24 13:06:44 crc kubenswrapper[4756]: E1124 13:06:44.633599 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e23dac-91d1-47d0-9ae7-96ee82cd8749" 
containerName="extract-content" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.633609 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e23dac-91d1-47d0-9ae7-96ee82cd8749" containerName="extract-content" Nov 24 13:06:44 crc kubenswrapper[4756]: E1124 13:06:44.633635 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e23dac-91d1-47d0-9ae7-96ee82cd8749" containerName="extract-utilities" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.633647 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e23dac-91d1-47d0-9ae7-96ee82cd8749" containerName="extract-utilities" Nov 24 13:06:44 crc kubenswrapper[4756]: E1124 13:06:44.633659 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d6850b-0a93-4020-93da-21aa7aaa8ff0" containerName="registry-server" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.633670 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d6850b-0a93-4020-93da-21aa7aaa8ff0" containerName="registry-server" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.633965 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d6850b-0a93-4020-93da-21aa7aaa8ff0" containerName="registry-server" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.633991 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e23dac-91d1-47d0-9ae7-96ee82cd8749" containerName="registry-server" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.636413 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.644326 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxr7d"] Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.770662 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df08e202-6bd9-49b6-8138-cafc3035aeb8-catalog-content\") pod \"redhat-marketplace-jxr7d\" (UID: \"df08e202-6bd9-49b6-8138-cafc3035aeb8\") " pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.770699 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfzpg\" (UniqueName: \"kubernetes.io/projected/df08e202-6bd9-49b6-8138-cafc3035aeb8-kube-api-access-kfzpg\") pod \"redhat-marketplace-jxr7d\" (UID: \"df08e202-6bd9-49b6-8138-cafc3035aeb8\") " pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.771115 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df08e202-6bd9-49b6-8138-cafc3035aeb8-utilities\") pod \"redhat-marketplace-jxr7d\" (UID: \"df08e202-6bd9-49b6-8138-cafc3035aeb8\") " pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.872707 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df08e202-6bd9-49b6-8138-cafc3035aeb8-utilities\") pod \"redhat-marketplace-jxr7d\" (UID: \"df08e202-6bd9-49b6-8138-cafc3035aeb8\") " pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.872803 4756 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df08e202-6bd9-49b6-8138-cafc3035aeb8-catalog-content\") pod \"redhat-marketplace-jxr7d\" (UID: \"df08e202-6bd9-49b6-8138-cafc3035aeb8\") " pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.872826 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfzpg\" (UniqueName: \"kubernetes.io/projected/df08e202-6bd9-49b6-8138-cafc3035aeb8-kube-api-access-kfzpg\") pod \"redhat-marketplace-jxr7d\" (UID: \"df08e202-6bd9-49b6-8138-cafc3035aeb8\") " pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.873279 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df08e202-6bd9-49b6-8138-cafc3035aeb8-utilities\") pod \"redhat-marketplace-jxr7d\" (UID: \"df08e202-6bd9-49b6-8138-cafc3035aeb8\") " pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.873296 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df08e202-6bd9-49b6-8138-cafc3035aeb8-catalog-content\") pod \"redhat-marketplace-jxr7d\" (UID: \"df08e202-6bd9-49b6-8138-cafc3035aeb8\") " pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.898932 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfzpg\" (UniqueName: \"kubernetes.io/projected/df08e202-6bd9-49b6-8138-cafc3035aeb8-kube-api-access-kfzpg\") pod \"redhat-marketplace-jxr7d\" (UID: \"df08e202-6bd9-49b6-8138-cafc3035aeb8\") " pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:44 crc kubenswrapper[4756]: I1124 13:06:44.974858 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:45 crc kubenswrapper[4756]: I1124 13:06:45.142203 4756 generic.go:334] "Generic (PLEG): container finished" podID="9956cbef-8286-4c85-9c91-4e476d82d3d9" containerID="5a01d7fb6b5dbf38fac0e0296ae16ab55c0b533ebc1b19f2540983c8e6b8de72" exitCode=0 Nov 24 13:06:45 crc kubenswrapper[4756]: I1124 13:06:45.142237 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" event={"ID":"9956cbef-8286-4c85-9c91-4e476d82d3d9","Type":"ContainerDied","Data":"5a01d7fb6b5dbf38fac0e0296ae16ab55c0b533ebc1b19f2540983c8e6b8de72"} Nov 24 13:06:45 crc kubenswrapper[4756]: I1124 13:06:45.487290 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxr7d"] Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.155444 4756 generic.go:334] "Generic (PLEG): container finished" podID="df08e202-6bd9-49b6-8138-cafc3035aeb8" containerID="6de2bf9de2e88482bc85437d5ba8eed2a02297dacf26e37c65e629b16bb9f59f" exitCode=0 Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.155535 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxr7d" event={"ID":"df08e202-6bd9-49b6-8138-cafc3035aeb8","Type":"ContainerDied","Data":"6de2bf9de2e88482bc85437d5ba8eed2a02297dacf26e37c65e629b16bb9f59f"} Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.155619 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxr7d" event={"ID":"df08e202-6bd9-49b6-8138-cafc3035aeb8","Type":"ContainerStarted","Data":"a292de6b7c21fd6bce21f8862489b1aa19e3d690a04919cbdca075a547e2ba74"} Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.591917 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.711135 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-inventory\") pod \"9956cbef-8286-4c85-9c91-4e476d82d3d9\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.711216 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t96jc\" (UniqueName: \"kubernetes.io/projected/9956cbef-8286-4c85-9c91-4e476d82d3d9-kube-api-access-t96jc\") pod \"9956cbef-8286-4c85-9c91-4e476d82d3d9\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.711270 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-ssh-key\") pod \"9956cbef-8286-4c85-9c91-4e476d82d3d9\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.711342 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-libvirt-secret-0\") pod \"9956cbef-8286-4c85-9c91-4e476d82d3d9\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.711405 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-libvirt-combined-ca-bundle\") pod \"9956cbef-8286-4c85-9c91-4e476d82d3d9\" (UID: \"9956cbef-8286-4c85-9c91-4e476d82d3d9\") " Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.723852 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/9956cbef-8286-4c85-9c91-4e476d82d3d9-kube-api-access-t96jc" (OuterVolumeSpecName: "kube-api-access-t96jc") pod "9956cbef-8286-4c85-9c91-4e476d82d3d9" (UID: "9956cbef-8286-4c85-9c91-4e476d82d3d9"). InnerVolumeSpecName "kube-api-access-t96jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.726472 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9956cbef-8286-4c85-9c91-4e476d82d3d9" (UID: "9956cbef-8286-4c85-9c91-4e476d82d3d9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.769440 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9956cbef-8286-4c85-9c91-4e476d82d3d9" (UID: "9956cbef-8286-4c85-9c91-4e476d82d3d9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.775113 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "9956cbef-8286-4c85-9c91-4e476d82d3d9" (UID: "9956cbef-8286-4c85-9c91-4e476d82d3d9"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.776949 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-inventory" (OuterVolumeSpecName: "inventory") pod "9956cbef-8286-4c85-9c91-4e476d82d3d9" (UID: "9956cbef-8286-4c85-9c91-4e476d82d3d9"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.816110 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.816208 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t96jc\" (UniqueName: \"kubernetes.io/projected/9956cbef-8286-4c85-9c91-4e476d82d3d9-kube-api-access-t96jc\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.816231 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.816248 4756 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:46 crc kubenswrapper[4756]: I1124 13:06:46.816267 4756 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9956cbef-8286-4c85-9c91-4e476d82d3d9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.166872 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" event={"ID":"9956cbef-8286-4c85-9c91-4e476d82d3d9","Type":"ContainerDied","Data":"a0e03a9a375dd034bca280f8c0dc39367389bc021522ffb218f3292b486b4bbd"} Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.166929 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e03a9a375dd034bca280f8c0dc39367389bc021522ffb218f3292b486b4bbd" Nov 24 13:06:47 
crc kubenswrapper[4756]: I1124 13:06:47.166931 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cnghh" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.538625 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt"] Nov 24 13:06:47 crc kubenswrapper[4756]: E1124 13:06:47.539624 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9956cbef-8286-4c85-9c91-4e476d82d3d9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.539742 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9956cbef-8286-4c85-9c91-4e476d82d3d9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.541107 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="9956cbef-8286-4c85-9c91-4e476d82d3d9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.542187 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.549226 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.549542 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.549644 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.549757 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.549944 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.549948 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.550042 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.581783 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt"] Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.633872 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: 
I1124 13:06:47.633925 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.634241 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.634300 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.634350 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.634378 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.634400 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.634421 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjs6h\" (UniqueName: \"kubernetes.io/projected/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-kube-api-access-sjs6h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.634465 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.736033 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: 
\"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.736116 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.736178 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.736213 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjs6h\" (UniqueName: \"kubernetes.io/projected/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-kube-api-access-sjs6h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.736293 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.736389 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.736422 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.736556 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.736640 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.738452 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.743252 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.743233 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.744105 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.744409 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.745745 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.749010 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.749913 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.766384 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjs6h\" (UniqueName: \"kubernetes.io/projected/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-kube-api-access-sjs6h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mljzt\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:47 crc kubenswrapper[4756]: I1124 13:06:47.877895 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:06:48 crc kubenswrapper[4756]: I1124 13:06:48.183020 4756 generic.go:334] "Generic (PLEG): container finished" podID="df08e202-6bd9-49b6-8138-cafc3035aeb8" containerID="7e39d18d994ee32dc600fd324f9036d10aeb182c0566fbc4604dae23935fcbcc" exitCode=0 Nov 24 13:06:48 crc kubenswrapper[4756]: I1124 13:06:48.183114 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxr7d" event={"ID":"df08e202-6bd9-49b6-8138-cafc3035aeb8","Type":"ContainerDied","Data":"7e39d18d994ee32dc600fd324f9036d10aeb182c0566fbc4604dae23935fcbcc"} Nov 24 13:06:48 crc kubenswrapper[4756]: I1124 13:06:48.400181 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt"] Nov 24 13:06:48 crc kubenswrapper[4756]: W1124 13:06:48.401984 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cc3b9cc_6392_479d_bb83_af7c5fe6d79d.slice/crio-b742cb514524e868257aaa032642dfdf3892c603ffe2ed4c5abc9951bc62f074 WatchSource:0}: Error finding container b742cb514524e868257aaa032642dfdf3892c603ffe2ed4c5abc9951bc62f074: Status 404 returned error can't find the container with id b742cb514524e868257aaa032642dfdf3892c603ffe2ed4c5abc9951bc62f074 Nov 24 13:06:48 crc kubenswrapper[4756]: I1124 13:06:48.804726 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 13:06:49 crc kubenswrapper[4756]: I1124 13:06:49.210218 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxr7d" event={"ID":"df08e202-6bd9-49b6-8138-cafc3035aeb8","Type":"ContainerStarted","Data":"6442b23864a2fe981c97804a02f451dcef382c5272e5b4969f30e7f83c815a75"} Nov 24 13:06:49 crc kubenswrapper[4756]: I1124 13:06:49.211965 4756 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" event={"ID":"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d","Type":"ContainerStarted","Data":"b742cb514524e868257aaa032642dfdf3892c603ffe2ed4c5abc9951bc62f074"} Nov 24 13:06:49 crc kubenswrapper[4756]: I1124 13:06:49.232606 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jxr7d" podStartSLOduration=2.761175188 podStartE2EDuration="5.232580135s" podCreationTimestamp="2025-11-24 13:06:44 +0000 UTC" firstStartedPulling="2025-11-24 13:06:46.162132411 +0000 UTC m=+2338.519646563" lastFinishedPulling="2025-11-24 13:06:48.633537358 +0000 UTC m=+2340.991051510" observedRunningTime="2025-11-24 13:06:49.227182928 +0000 UTC m=+2341.584697070" watchObservedRunningTime="2025-11-24 13:06:49.232580135 +0000 UTC m=+2341.590094277" Nov 24 13:06:50 crc kubenswrapper[4756]: I1124 13:06:50.226286 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" event={"ID":"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d","Type":"ContainerStarted","Data":"b4a82ea0d46716ff66c4322c9821c424a28d304cde577badac73a848525a45f9"} Nov 24 13:06:50 crc kubenswrapper[4756]: I1124 13:06:50.257815 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" podStartSLOduration=2.86041405 podStartE2EDuration="3.257790924s" podCreationTimestamp="2025-11-24 13:06:47 +0000 UTC" firstStartedPulling="2025-11-24 13:06:48.404626959 +0000 UTC m=+2340.762141101" lastFinishedPulling="2025-11-24 13:06:48.802003833 +0000 UTC m=+2341.159517975" observedRunningTime="2025-11-24 13:06:50.247999167 +0000 UTC m=+2342.605513309" watchObservedRunningTime="2025-11-24 13:06:50.257790924 +0000 UTC m=+2342.615305066" Nov 24 13:06:51 crc kubenswrapper[4756]: I1124 13:06:51.476181 4756 scope.go:117] "RemoveContainer" 
containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:06:51 crc kubenswrapper[4756]: E1124 13:06:51.476634 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:06:54 crc kubenswrapper[4756]: I1124 13:06:54.975287 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:54 crc kubenswrapper[4756]: I1124 13:06:54.977047 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:55 crc kubenswrapper[4756]: I1124 13:06:55.054643 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:55 crc kubenswrapper[4756]: I1124 13:06:55.356651 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:56 crc kubenswrapper[4756]: I1124 13:06:56.815675 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxr7d"] Nov 24 13:06:57 crc kubenswrapper[4756]: I1124 13:06:57.324495 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jxr7d" podUID="df08e202-6bd9-49b6-8138-cafc3035aeb8" containerName="registry-server" containerID="cri-o://6442b23864a2fe981c97804a02f451dcef382c5272e5b4969f30e7f83c815a75" gracePeriod=2 Nov 24 13:06:57 crc kubenswrapper[4756]: I1124 13:06:57.911240 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:57 crc kubenswrapper[4756]: I1124 13:06:57.962707 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfzpg\" (UniqueName: \"kubernetes.io/projected/df08e202-6bd9-49b6-8138-cafc3035aeb8-kube-api-access-kfzpg\") pod \"df08e202-6bd9-49b6-8138-cafc3035aeb8\" (UID: \"df08e202-6bd9-49b6-8138-cafc3035aeb8\") " Nov 24 13:06:57 crc kubenswrapper[4756]: I1124 13:06:57.962791 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df08e202-6bd9-49b6-8138-cafc3035aeb8-catalog-content\") pod \"df08e202-6bd9-49b6-8138-cafc3035aeb8\" (UID: \"df08e202-6bd9-49b6-8138-cafc3035aeb8\") " Nov 24 13:06:57 crc kubenswrapper[4756]: I1124 13:06:57.962981 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df08e202-6bd9-49b6-8138-cafc3035aeb8-utilities\") pod \"df08e202-6bd9-49b6-8138-cafc3035aeb8\" (UID: \"df08e202-6bd9-49b6-8138-cafc3035aeb8\") " Nov 24 13:06:57 crc kubenswrapper[4756]: I1124 13:06:57.966093 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df08e202-6bd9-49b6-8138-cafc3035aeb8-utilities" (OuterVolumeSpecName: "utilities") pod "df08e202-6bd9-49b6-8138-cafc3035aeb8" (UID: "df08e202-6bd9-49b6-8138-cafc3035aeb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:06:57 crc kubenswrapper[4756]: I1124 13:06:57.970950 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df08e202-6bd9-49b6-8138-cafc3035aeb8-kube-api-access-kfzpg" (OuterVolumeSpecName: "kube-api-access-kfzpg") pod "df08e202-6bd9-49b6-8138-cafc3035aeb8" (UID: "df08e202-6bd9-49b6-8138-cafc3035aeb8"). InnerVolumeSpecName "kube-api-access-kfzpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:06:57 crc kubenswrapper[4756]: I1124 13:06:57.990306 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df08e202-6bd9-49b6-8138-cafc3035aeb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df08e202-6bd9-49b6-8138-cafc3035aeb8" (UID: "df08e202-6bd9-49b6-8138-cafc3035aeb8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.066968 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfzpg\" (UniqueName: \"kubernetes.io/projected/df08e202-6bd9-49b6-8138-cafc3035aeb8-kube-api-access-kfzpg\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.067004 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df08e202-6bd9-49b6-8138-cafc3035aeb8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.067013 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df08e202-6bd9-49b6-8138-cafc3035aeb8-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.335523 4756 generic.go:334] "Generic (PLEG): container finished" podID="df08e202-6bd9-49b6-8138-cafc3035aeb8" containerID="6442b23864a2fe981c97804a02f451dcef382c5272e5b4969f30e7f83c815a75" exitCode=0 Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.335631 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxr7d" event={"ID":"df08e202-6bd9-49b6-8138-cafc3035aeb8","Type":"ContainerDied","Data":"6442b23864a2fe981c97804a02f451dcef382c5272e5b4969f30e7f83c815a75"} Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.336286 4756 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-jxr7d" event={"ID":"df08e202-6bd9-49b6-8138-cafc3035aeb8","Type":"ContainerDied","Data":"a292de6b7c21fd6bce21f8862489b1aa19e3d690a04919cbdca075a547e2ba74"} Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.335704 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxr7d" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.336360 4756 scope.go:117] "RemoveContainer" containerID="6442b23864a2fe981c97804a02f451dcef382c5272e5b4969f30e7f83c815a75" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.364124 4756 scope.go:117] "RemoveContainer" containerID="7e39d18d994ee32dc600fd324f9036d10aeb182c0566fbc4604dae23935fcbcc" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.381611 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxr7d"] Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.391824 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxr7d"] Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.408405 4756 scope.go:117] "RemoveContainer" containerID="6de2bf9de2e88482bc85437d5ba8eed2a02297dacf26e37c65e629b16bb9f59f" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.440361 4756 scope.go:117] "RemoveContainer" containerID="6442b23864a2fe981c97804a02f451dcef382c5272e5b4969f30e7f83c815a75" Nov 24 13:06:58 crc kubenswrapper[4756]: E1124 13:06:58.440960 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6442b23864a2fe981c97804a02f451dcef382c5272e5b4969f30e7f83c815a75\": container with ID starting with 6442b23864a2fe981c97804a02f451dcef382c5272e5b4969f30e7f83c815a75 not found: ID does not exist" containerID="6442b23864a2fe981c97804a02f451dcef382c5272e5b4969f30e7f83c815a75" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.440998 4756 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6442b23864a2fe981c97804a02f451dcef382c5272e5b4969f30e7f83c815a75"} err="failed to get container status \"6442b23864a2fe981c97804a02f451dcef382c5272e5b4969f30e7f83c815a75\": rpc error: code = NotFound desc = could not find container \"6442b23864a2fe981c97804a02f451dcef382c5272e5b4969f30e7f83c815a75\": container with ID starting with 6442b23864a2fe981c97804a02f451dcef382c5272e5b4969f30e7f83c815a75 not found: ID does not exist" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.441025 4756 scope.go:117] "RemoveContainer" containerID="7e39d18d994ee32dc600fd324f9036d10aeb182c0566fbc4604dae23935fcbcc" Nov 24 13:06:58 crc kubenswrapper[4756]: E1124 13:06:58.441405 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e39d18d994ee32dc600fd324f9036d10aeb182c0566fbc4604dae23935fcbcc\": container with ID starting with 7e39d18d994ee32dc600fd324f9036d10aeb182c0566fbc4604dae23935fcbcc not found: ID does not exist" containerID="7e39d18d994ee32dc600fd324f9036d10aeb182c0566fbc4604dae23935fcbcc" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.441434 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e39d18d994ee32dc600fd324f9036d10aeb182c0566fbc4604dae23935fcbcc"} err="failed to get container status \"7e39d18d994ee32dc600fd324f9036d10aeb182c0566fbc4604dae23935fcbcc\": rpc error: code = NotFound desc = could not find container \"7e39d18d994ee32dc600fd324f9036d10aeb182c0566fbc4604dae23935fcbcc\": container with ID starting with 7e39d18d994ee32dc600fd324f9036d10aeb182c0566fbc4604dae23935fcbcc not found: ID does not exist" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.441452 4756 scope.go:117] "RemoveContainer" containerID="6de2bf9de2e88482bc85437d5ba8eed2a02297dacf26e37c65e629b16bb9f59f" Nov 24 13:06:58 crc kubenswrapper[4756]: E1124 
13:06:58.441967 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de2bf9de2e88482bc85437d5ba8eed2a02297dacf26e37c65e629b16bb9f59f\": container with ID starting with 6de2bf9de2e88482bc85437d5ba8eed2a02297dacf26e37c65e629b16bb9f59f not found: ID does not exist" containerID="6de2bf9de2e88482bc85437d5ba8eed2a02297dacf26e37c65e629b16bb9f59f" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.441997 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de2bf9de2e88482bc85437d5ba8eed2a02297dacf26e37c65e629b16bb9f59f"} err="failed to get container status \"6de2bf9de2e88482bc85437d5ba8eed2a02297dacf26e37c65e629b16bb9f59f\": rpc error: code = NotFound desc = could not find container \"6de2bf9de2e88482bc85437d5ba8eed2a02297dacf26e37c65e629b16bb9f59f\": container with ID starting with 6de2bf9de2e88482bc85437d5ba8eed2a02297dacf26e37c65e629b16bb9f59f not found: ID does not exist" Nov 24 13:06:58 crc kubenswrapper[4756]: I1124 13:06:58.489087 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df08e202-6bd9-49b6-8138-cafc3035aeb8" path="/var/lib/kubelet/pods/df08e202-6bd9-49b6-8138-cafc3035aeb8/volumes" Nov 24 13:07:04 crc kubenswrapper[4756]: I1124 13:07:04.476312 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:07:04 crc kubenswrapper[4756]: E1124 13:07:04.476977 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:07:19 crc kubenswrapper[4756]: I1124 13:07:19.476020 
4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:07:19 crc kubenswrapper[4756]: E1124 13:07:19.477080 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:07:31 crc kubenswrapper[4756]: I1124 13:07:31.476398 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:07:31 crc kubenswrapper[4756]: E1124 13:07:31.477112 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:07:44 crc kubenswrapper[4756]: I1124 13:07:44.475259 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:07:44 crc kubenswrapper[4756]: E1124 13:07:44.476053 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:07:55 crc kubenswrapper[4756]: I1124 
13:07:55.476232 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:07:55 crc kubenswrapper[4756]: E1124 13:07:55.476927 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:08:08 crc kubenswrapper[4756]: I1124 13:08:08.485884 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:08:08 crc kubenswrapper[4756]: E1124 13:08:08.486757 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:08:21 crc kubenswrapper[4756]: I1124 13:08:21.475891 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:08:21 crc kubenswrapper[4756]: E1124 13:08:21.476686 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:08:34 crc 
kubenswrapper[4756]: I1124 13:08:34.475783 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:08:34 crc kubenswrapper[4756]: E1124 13:08:34.477573 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:08:47 crc kubenswrapper[4756]: I1124 13:08:47.476033 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:08:47 crc kubenswrapper[4756]: E1124 13:08:47.477139 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:08:58 crc kubenswrapper[4756]: I1124 13:08:58.489209 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:08:58 crc kubenswrapper[4756]: E1124 13:08:58.490482 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 
24 13:09:09 crc kubenswrapper[4756]: I1124 13:09:09.476337 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:09:09 crc kubenswrapper[4756]: E1124 13:09:09.477440 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:09:21 crc kubenswrapper[4756]: I1124 13:09:21.476286 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:09:21 crc kubenswrapper[4756]: E1124 13:09:21.477901 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:09:34 crc kubenswrapper[4756]: I1124 13:09:34.476463 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:09:34 crc kubenswrapper[4756]: E1124 13:09:34.477340 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" 
podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:09:49 crc kubenswrapper[4756]: I1124 13:09:49.475932 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:09:49 crc kubenswrapper[4756]: E1124 13:09:49.476527 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:09:57 crc kubenswrapper[4756]: I1124 13:09:57.663289 4756 generic.go:334] "Generic (PLEG): container finished" podID="0cc3b9cc-6392-479d-bb83-af7c5fe6d79d" containerID="b4a82ea0d46716ff66c4322c9821c424a28d304cde577badac73a848525a45f9" exitCode=0 Nov 24 13:09:57 crc kubenswrapper[4756]: I1124 13:09:57.663342 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" event={"ID":"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d","Type":"ContainerDied","Data":"b4a82ea0d46716ff66c4322c9821c424a28d304cde577badac73a848525a45f9"} Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.181912 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.215538 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-migration-ssh-key-0\") pod \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.215887 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-migration-ssh-key-1\") pod \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.215991 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjs6h\" (UniqueName: \"kubernetes.io/projected/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-kube-api-access-sjs6h\") pod \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.216062 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-extra-config-0\") pod \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.216088 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-ssh-key\") pod \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.216122 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-cell1-compute-config-0\") pod \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.216235 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-inventory\") pod \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.216321 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-cell1-compute-config-1\") pod \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.216398 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-combined-ca-bundle\") pod \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\" (UID: \"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d\") " Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.227949 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d" (UID: "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.234497 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-kube-api-access-sjs6h" (OuterVolumeSpecName: "kube-api-access-sjs6h") pod "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d" (UID: "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d"). InnerVolumeSpecName "kube-api-access-sjs6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.250430 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d" (UID: "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.263419 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d" (UID: "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.263900 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d" (UID: "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.270590 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-inventory" (OuterVolumeSpecName: "inventory") pod "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d" (UID: "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.279149 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d" (UID: "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.279610 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d" (UID: "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.282199 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d" (UID: "0cc3b9cc-6392-479d-bb83-af7c5fe6d79d"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.318871 4756 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.318907 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.318920 4756 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.318935 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.318967 4756 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.318976 4756 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.318984 4756 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 24 
13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.318992 4756 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.319000 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjs6h\" (UniqueName: \"kubernetes.io/projected/0cc3b9cc-6392-479d-bb83-af7c5fe6d79d-kube-api-access-sjs6h\") on node \"crc\" DevicePath \"\"" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.686538 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" event={"ID":"0cc3b9cc-6392-479d-bb83-af7c5fe6d79d","Type":"ContainerDied","Data":"b742cb514524e868257aaa032642dfdf3892c603ffe2ed4c5abc9951bc62f074"} Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.686583 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b742cb514524e868257aaa032642dfdf3892c603ffe2ed4c5abc9951bc62f074" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.686586 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mljzt" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.802946 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw"] Nov 24 13:09:59 crc kubenswrapper[4756]: E1124 13:09:59.803483 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df08e202-6bd9-49b6-8138-cafc3035aeb8" containerName="extract-content" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.803501 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="df08e202-6bd9-49b6-8138-cafc3035aeb8" containerName="extract-content" Nov 24 13:09:59 crc kubenswrapper[4756]: E1124 13:09:59.803538 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df08e202-6bd9-49b6-8138-cafc3035aeb8" containerName="extract-utilities" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.803547 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="df08e202-6bd9-49b6-8138-cafc3035aeb8" containerName="extract-utilities" Nov 24 13:09:59 crc kubenswrapper[4756]: E1124 13:09:59.803560 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df08e202-6bd9-49b6-8138-cafc3035aeb8" containerName="registry-server" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.803567 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="df08e202-6bd9-49b6-8138-cafc3035aeb8" containerName="registry-server" Nov 24 13:09:59 crc kubenswrapper[4756]: E1124 13:09:59.803604 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cc3b9cc-6392-479d-bb83-af7c5fe6d79d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.803614 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cc3b9cc-6392-479d-bb83-af7c5fe6d79d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.803882 4756 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc3b9cc-6392-479d-bb83-af7c5fe6d79d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.804108 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="df08e202-6bd9-49b6-8138-cafc3035aeb8" containerName="registry-server" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.805104 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.807091 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sg7df" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.808800 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.809107 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.809142 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.809406 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.815150 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw"] Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.826822 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.826950 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.826980 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.826997 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6p7q\" (UniqueName: \"kubernetes.io/projected/24a1eee7-f667-475d-9b05-0b9f49a5619c-kube-api-access-v6p7q\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.827069 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.827090 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.827117 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.929285 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.929427 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.929469 4756 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.929497 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6p7q\" (UniqueName: \"kubernetes.io/projected/24a1eee7-f667-475d-9b05-0b9f49a5619c-kube-api-access-v6p7q\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.929548 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.929576 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.929607 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.933485 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.933519 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.934074 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.934288 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.934926 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.935929 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:09:59 crc kubenswrapper[4756]: I1124 13:09:59.950099 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6p7q\" (UniqueName: \"kubernetes.io/projected/24a1eee7-f667-475d-9b05-0b9f49a5619c-kube-api-access-v6p7q\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:10:00 crc kubenswrapper[4756]: I1124 13:10:00.123574 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:10:00 crc kubenswrapper[4756]: I1124 13:10:00.679645 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw"] Nov 24 13:10:00 crc kubenswrapper[4756]: W1124 13:10:00.686903 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24a1eee7_f667_475d_9b05_0b9f49a5619c.slice/crio-2e61add244077da8036313202e3a9ea74e1c88cebb731da4ae595bc9cc4268cd WatchSource:0}: Error finding container 2e61add244077da8036313202e3a9ea74e1c88cebb731da4ae595bc9cc4268cd: Status 404 returned error can't find the container with id 2e61add244077da8036313202e3a9ea74e1c88cebb731da4ae595bc9cc4268cd Nov 24 13:10:00 crc kubenswrapper[4756]: I1124 13:10:00.693866 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 13:10:01 crc kubenswrapper[4756]: I1124 13:10:01.707410 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" event={"ID":"24a1eee7-f667-475d-9b05-0b9f49a5619c","Type":"ContainerStarted","Data":"347aed3e69abb18e22e9b68d344dd6c72909f88552560a96a68c8fb09b43a349"} Nov 24 13:10:01 crc kubenswrapper[4756]: I1124 13:10:01.709212 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" event={"ID":"24a1eee7-f667-475d-9b05-0b9f49a5619c","Type":"ContainerStarted","Data":"2e61add244077da8036313202e3a9ea74e1c88cebb731da4ae595bc9cc4268cd"} Nov 24 13:10:01 crc kubenswrapper[4756]: I1124 13:10:01.735450 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" podStartSLOduration=2.202340488 podStartE2EDuration="2.735426346s" podCreationTimestamp="2025-11-24 13:09:59 +0000 UTC" 
firstStartedPulling="2025-11-24 13:10:00.693624096 +0000 UTC m=+2533.051138238" lastFinishedPulling="2025-11-24 13:10:01.226709954 +0000 UTC m=+2533.584224096" observedRunningTime="2025-11-24 13:10:01.728177338 +0000 UTC m=+2534.085691490" watchObservedRunningTime="2025-11-24 13:10:01.735426346 +0000 UTC m=+2534.092940498" Nov 24 13:10:04 crc kubenswrapper[4756]: I1124 13:10:04.475955 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:10:04 crc kubenswrapper[4756]: E1124 13:10:04.476478 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:10:19 crc kubenswrapper[4756]: I1124 13:10:19.475750 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:10:19 crc kubenswrapper[4756]: E1124 13:10:19.477004 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:10:32 crc kubenswrapper[4756]: I1124 13:10:32.476004 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:10:32 crc kubenswrapper[4756]: E1124 13:10:32.477236 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:10:44 crc kubenswrapper[4756]: I1124 13:10:44.477298 4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:10:45 crc kubenswrapper[4756]: I1124 13:10:45.216745 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"968adb00eba3972fea3c57ea56020fa61fc87024729430d232e9e3d2fd8e7600"} Nov 24 13:12:27 crc kubenswrapper[4756]: I1124 13:12:27.313911 4756 generic.go:334] "Generic (PLEG): container finished" podID="24a1eee7-f667-475d-9b05-0b9f49a5619c" containerID="347aed3e69abb18e22e9b68d344dd6c72909f88552560a96a68c8fb09b43a349" exitCode=0 Nov 24 13:12:27 crc kubenswrapper[4756]: I1124 13:12:27.314025 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" event={"ID":"24a1eee7-f667-475d-9b05-0b9f49a5619c","Type":"ContainerDied","Data":"347aed3e69abb18e22e9b68d344dd6c72909f88552560a96a68c8fb09b43a349"} Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 13:12:28.871465 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 13:12:28.904270 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ssh-key\") pod \"24a1eee7-f667-475d-9b05-0b9f49a5619c\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 13:12:28.904357 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6p7q\" (UniqueName: \"kubernetes.io/projected/24a1eee7-f667-475d-9b05-0b9f49a5619c-kube-api-access-v6p7q\") pod \"24a1eee7-f667-475d-9b05-0b9f49a5619c\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 13:12:28.904383 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-telemetry-combined-ca-bundle\") pod \"24a1eee7-f667-475d-9b05-0b9f49a5619c\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 13:12:28.904477 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-0\") pod \"24a1eee7-f667-475d-9b05-0b9f49a5619c\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 13:12:28.904508 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-2\") pod \"24a1eee7-f667-475d-9b05-0b9f49a5619c\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 
13:12:28.904592 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-1\") pod \"24a1eee7-f667-475d-9b05-0b9f49a5619c\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 13:12:28.904634 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-inventory\") pod \"24a1eee7-f667-475d-9b05-0b9f49a5619c\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 13:12:28.913442 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a1eee7-f667-475d-9b05-0b9f49a5619c-kube-api-access-v6p7q" (OuterVolumeSpecName: "kube-api-access-v6p7q") pod "24a1eee7-f667-475d-9b05-0b9f49a5619c" (UID: "24a1eee7-f667-475d-9b05-0b9f49a5619c"). InnerVolumeSpecName "kube-api-access-v6p7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 13:12:28.922447 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "24a1eee7-f667-475d-9b05-0b9f49a5619c" (UID: "24a1eee7-f667-475d-9b05-0b9f49a5619c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 13:12:28.947649 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "24a1eee7-f667-475d-9b05-0b9f49a5619c" (UID: "24a1eee7-f667-475d-9b05-0b9f49a5619c"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 13:12:28.950434 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "24a1eee7-f667-475d-9b05-0b9f49a5619c" (UID: "24a1eee7-f667-475d-9b05-0b9f49a5619c"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 13:12:28.952405 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "24a1eee7-f667-475d-9b05-0b9f49a5619c" (UID: "24a1eee7-f667-475d-9b05-0b9f49a5619c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:12:28 crc kubenswrapper[4756]: E1124 13:12:28.959007 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-2 podName:24a1eee7-f667-475d-9b05-0b9f49a5619c nodeName:}" failed. No retries permitted until 2025-11-24 13:12:29.458971282 +0000 UTC m=+2681.816485444 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "ceilometer-compute-config-data-2" (UniqueName: "kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-2") pod "24a1eee7-f667-475d-9b05-0b9f49a5619c" (UID: "24a1eee7-f667-475d-9b05-0b9f49a5619c") : error deleting /var/lib/kubelet/pods/24a1eee7-f667-475d-9b05-0b9f49a5619c/volume-subpaths: remove /var/lib/kubelet/pods/24a1eee7-f667-475d-9b05-0b9f49a5619c/volume-subpaths: no such file or directory Nov 24 13:12:28 crc kubenswrapper[4756]: I1124 13:12:28.963783 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-inventory" (OuterVolumeSpecName: "inventory") pod "24a1eee7-f667-475d-9b05-0b9f49a5619c" (UID: "24a1eee7-f667-475d-9b05-0b9f49a5619c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:12:29 crc kubenswrapper[4756]: I1124 13:12:29.006610 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 13:12:29 crc kubenswrapper[4756]: I1124 13:12:29.006643 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 13:12:29 crc kubenswrapper[4756]: I1124 13:12:29.006655 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6p7q\" (UniqueName: \"kubernetes.io/projected/24a1eee7-f667-475d-9b05-0b9f49a5619c-kube-api-access-v6p7q\") on node \"crc\" DevicePath \"\"" Nov 24 13:12:29 crc kubenswrapper[4756]: I1124 13:12:29.006670 4756 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:12:29 
crc kubenswrapper[4756]: I1124 13:12:29.006682 4756 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 13:12:29 crc kubenswrapper[4756]: I1124 13:12:29.006694 4756 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 13:12:29 crc kubenswrapper[4756]: I1124 13:12:29.339492 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" event={"ID":"24a1eee7-f667-475d-9b05-0b9f49a5619c","Type":"ContainerDied","Data":"2e61add244077da8036313202e3a9ea74e1c88cebb731da4ae595bc9cc4268cd"} Nov 24 13:12:29 crc kubenswrapper[4756]: I1124 13:12:29.339530 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e61add244077da8036313202e3a9ea74e1c88cebb731da4ae595bc9cc4268cd" Nov 24 13:12:29 crc kubenswrapper[4756]: I1124 13:12:29.339529 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw" Nov 24 13:12:29 crc kubenswrapper[4756]: I1124 13:12:29.516096 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-2\") pod \"24a1eee7-f667-475d-9b05-0b9f49a5619c\" (UID: \"24a1eee7-f667-475d-9b05-0b9f49a5619c\") " Nov 24 13:12:29 crc kubenswrapper[4756]: I1124 13:12:29.521051 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "24a1eee7-f667-475d-9b05-0b9f49a5619c" (UID: "24a1eee7-f667-475d-9b05-0b9f49a5619c"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:12:29 crc kubenswrapper[4756]: I1124 13:12:29.622337 4756 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/24a1eee7-f667-475d-9b05-0b9f49a5619c-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.647796 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tx7hd"] Nov 24 13:12:57 crc kubenswrapper[4756]: E1124 13:12:57.648695 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a1eee7-f667-475d-9b05-0b9f49a5619c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.648709 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a1eee7-f667-475d-9b05-0b9f49a5619c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.648918 4756 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="24a1eee7-f667-475d-9b05-0b9f49a5619c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.650371 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.670635 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tx7hd"] Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.685344 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6x9x\" (UniqueName: \"kubernetes.io/projected/0bf17795-c1c5-4448-84df-f12e79cca35f-kube-api-access-s6x9x\") pod \"certified-operators-tx7hd\" (UID: \"0bf17795-c1c5-4448-84df-f12e79cca35f\") " pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.685538 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf17795-c1c5-4448-84df-f12e79cca35f-utilities\") pod \"certified-operators-tx7hd\" (UID: \"0bf17795-c1c5-4448-84df-f12e79cca35f\") " pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.685614 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf17795-c1c5-4448-84df-f12e79cca35f-catalog-content\") pod \"certified-operators-tx7hd\" (UID: \"0bf17795-c1c5-4448-84df-f12e79cca35f\") " pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.788539 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6x9x\" (UniqueName: 
\"kubernetes.io/projected/0bf17795-c1c5-4448-84df-f12e79cca35f-kube-api-access-s6x9x\") pod \"certified-operators-tx7hd\" (UID: \"0bf17795-c1c5-4448-84df-f12e79cca35f\") " pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.788732 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf17795-c1c5-4448-84df-f12e79cca35f-utilities\") pod \"certified-operators-tx7hd\" (UID: \"0bf17795-c1c5-4448-84df-f12e79cca35f\") " pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.788820 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf17795-c1c5-4448-84df-f12e79cca35f-catalog-content\") pod \"certified-operators-tx7hd\" (UID: \"0bf17795-c1c5-4448-84df-f12e79cca35f\") " pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.789414 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf17795-c1c5-4448-84df-f12e79cca35f-utilities\") pod \"certified-operators-tx7hd\" (UID: \"0bf17795-c1c5-4448-84df-f12e79cca35f\") " pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.789485 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf17795-c1c5-4448-84df-f12e79cca35f-catalog-content\") pod \"certified-operators-tx7hd\" (UID: \"0bf17795-c1c5-4448-84df-f12e79cca35f\") " pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.810895 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6x9x\" (UniqueName: 
\"kubernetes.io/projected/0bf17795-c1c5-4448-84df-f12e79cca35f-kube-api-access-s6x9x\") pod \"certified-operators-tx7hd\" (UID: \"0bf17795-c1c5-4448-84df-f12e79cca35f\") " pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:12:57 crc kubenswrapper[4756]: I1124 13:12:57.985542 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:12:58 crc kubenswrapper[4756]: I1124 13:12:58.487624 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tx7hd"] Nov 24 13:12:58 crc kubenswrapper[4756]: I1124 13:12:58.667190 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx7hd" event={"ID":"0bf17795-c1c5-4448-84df-f12e79cca35f","Type":"ContainerStarted","Data":"ce62c7d8de4c66572454d5592e3bd3b2cf0e23c03135c8c210ad7cbcc9c714c9"} Nov 24 13:12:59 crc kubenswrapper[4756]: I1124 13:12:59.682236 4756 generic.go:334] "Generic (PLEG): container finished" podID="0bf17795-c1c5-4448-84df-f12e79cca35f" containerID="417c90913734ad1f4189736f6a854027922aa54fd57be9bb51fcf6ee0dca38ba" exitCode=0 Nov 24 13:12:59 crc kubenswrapper[4756]: I1124 13:12:59.682371 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx7hd" event={"ID":"0bf17795-c1c5-4448-84df-f12e79cca35f","Type":"ContainerDied","Data":"417c90913734ad1f4189736f6a854027922aa54fd57be9bb51fcf6ee0dca38ba"} Nov 24 13:13:00 crc kubenswrapper[4756]: I1124 13:13:00.697202 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx7hd" event={"ID":"0bf17795-c1c5-4448-84df-f12e79cca35f","Type":"ContainerStarted","Data":"b3e00ea494f42e930dad1d552a24aec4f1668d612a6ba94b1e9a4af53efffab4"} Nov 24 13:13:01 crc kubenswrapper[4756]: I1124 13:13:01.710072 4756 generic.go:334] "Generic (PLEG): container finished" podID="0bf17795-c1c5-4448-84df-f12e79cca35f" 
containerID="b3e00ea494f42e930dad1d552a24aec4f1668d612a6ba94b1e9a4af53efffab4" exitCode=0 Nov 24 13:13:01 crc kubenswrapper[4756]: I1124 13:13:01.710178 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx7hd" event={"ID":"0bf17795-c1c5-4448-84df-f12e79cca35f","Type":"ContainerDied","Data":"b3e00ea494f42e930dad1d552a24aec4f1668d612a6ba94b1e9a4af53efffab4"} Nov 24 13:13:02 crc kubenswrapper[4756]: I1124 13:13:02.723619 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx7hd" event={"ID":"0bf17795-c1c5-4448-84df-f12e79cca35f","Type":"ContainerStarted","Data":"4a224661cc0c67d72af23104daefce078bdb225bf14e0d902784a79b3c8b4098"} Nov 24 13:13:02 crc kubenswrapper[4756]: I1124 13:13:02.751099 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tx7hd" podStartSLOduration=3.101621724 podStartE2EDuration="5.75107846s" podCreationTimestamp="2025-11-24 13:12:57 +0000 UTC" firstStartedPulling="2025-11-24 13:12:59.684862952 +0000 UTC m=+2712.042377144" lastFinishedPulling="2025-11-24 13:13:02.334319728 +0000 UTC m=+2714.691833880" observedRunningTime="2025-11-24 13:13:02.748654264 +0000 UTC m=+2715.106168416" watchObservedRunningTime="2025-11-24 13:13:02.75107846 +0000 UTC m=+2715.108592622" Nov 24 13:13:03 crc kubenswrapper[4756]: I1124 13:13:03.480310 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:13:03 crc kubenswrapper[4756]: I1124 13:13:03.480820 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:13:07 crc kubenswrapper[4756]: I1124 13:13:07.261289 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 13:13:07 crc kubenswrapper[4756]: I1124 13:13:07.261964 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="prometheus" containerID="cri-o://ab47ea840acc948a00c4aa917a71f2baab9d0baf5519e86c513395c7b9601068" gracePeriod=600 Nov 24 13:13:07 crc kubenswrapper[4756]: I1124 13:13:07.262063 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="config-reloader" containerID="cri-o://0cec482adb1b0d7c03f6f4fe6406f63b50d6ca8d44e72059efaa5baf7460cf29" gracePeriod=600 Nov 24 13:13:07 crc kubenswrapper[4756]: I1124 13:13:07.262094 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="thanos-sidecar" containerID="cri-o://3a8884d6cece83da44f47b8603a22592214627015eb425cfe95a9e35c7f61cd2" gracePeriod=600 Nov 24 13:13:07 crc kubenswrapper[4756]: I1124 13:13:07.786625 4756 generic.go:334] "Generic (PLEG): container finished" podID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerID="3a8884d6cece83da44f47b8603a22592214627015eb425cfe95a9e35c7f61cd2" exitCode=0 Nov 24 13:13:07 crc kubenswrapper[4756]: I1124 13:13:07.786939 4756 generic.go:334] "Generic (PLEG): container finished" podID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerID="0cec482adb1b0d7c03f6f4fe6406f63b50d6ca8d44e72059efaa5baf7460cf29" exitCode=0 Nov 24 13:13:07 crc kubenswrapper[4756]: I1124 13:13:07.786948 4756 generic.go:334] "Generic (PLEG): container 
finished" podID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerID="ab47ea840acc948a00c4aa917a71f2baab9d0baf5519e86c513395c7b9601068" exitCode=0 Nov 24 13:13:07 crc kubenswrapper[4756]: I1124 13:13:07.786967 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78c50331-2c8a-4ebf-8dfc-66456b7167c0","Type":"ContainerDied","Data":"3a8884d6cece83da44f47b8603a22592214627015eb425cfe95a9e35c7f61cd2"} Nov 24 13:13:07 crc kubenswrapper[4756]: I1124 13:13:07.786995 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78c50331-2c8a-4ebf-8dfc-66456b7167c0","Type":"ContainerDied","Data":"0cec482adb1b0d7c03f6f4fe6406f63b50d6ca8d44e72059efaa5baf7460cf29"} Nov 24 13:13:07 crc kubenswrapper[4756]: I1124 13:13:07.787004 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78c50331-2c8a-4ebf-8dfc-66456b7167c0","Type":"ContainerDied","Data":"ab47ea840acc948a00c4aa917a71f2baab9d0baf5519e86c513395c7b9601068"} Nov 24 13:13:07 crc kubenswrapper[4756]: I1124 13:13:07.986658 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:13:07 crc kubenswrapper[4756]: I1124 13:13:07.986702 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.044897 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.273424 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.337044 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/78c50331-2c8a-4ebf-8dfc-66456b7167c0-tls-assets\") pod \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.337083 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.337151 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l96q\" (UniqueName: \"kubernetes.io/projected/78c50331-2c8a-4ebf-8dfc-66456b7167c0-kube-api-access-7l96q\") pod \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.337188 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config\") pod \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.337960 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " Nov 24 13:13:08 crc kubenswrapper[4756]: 
I1124 13:13:08.338030 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-thanos-prometheus-http-client-file\") pod \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.338069 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-config\") pod \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.338095 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.338185 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/78c50331-2c8a-4ebf-8dfc-66456b7167c0-prometheus-metric-storage-rulefiles-0\") pod \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.338241 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-secret-combined-ca-bundle\") pod \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.338301 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/78c50331-2c8a-4ebf-8dfc-66456b7167c0-config-out\") pod \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\" (UID: \"78c50331-2c8a-4ebf-8dfc-66456b7167c0\") " Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.340214 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c50331-2c8a-4ebf-8dfc-66456b7167c0-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "78c50331-2c8a-4ebf-8dfc-66456b7167c0" (UID: "78c50331-2c8a-4ebf-8dfc-66456b7167c0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.343455 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c50331-2c8a-4ebf-8dfc-66456b7167c0-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "78c50331-2c8a-4ebf-8dfc-66456b7167c0" (UID: "78c50331-2c8a-4ebf-8dfc-66456b7167c0"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.343709 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c50331-2c8a-4ebf-8dfc-66456b7167c0-kube-api-access-7l96q" (OuterVolumeSpecName: "kube-api-access-7l96q") pod "78c50331-2c8a-4ebf-8dfc-66456b7167c0" (UID: "78c50331-2c8a-4ebf-8dfc-66456b7167c0"). InnerVolumeSpecName "kube-api-access-7l96q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.345370 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-config" (OuterVolumeSpecName: "config") pod "78c50331-2c8a-4ebf-8dfc-66456b7167c0" (UID: "78c50331-2c8a-4ebf-8dfc-66456b7167c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.346572 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "78c50331-2c8a-4ebf-8dfc-66456b7167c0" (UID: "78c50331-2c8a-4ebf-8dfc-66456b7167c0"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.346759 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "78c50331-2c8a-4ebf-8dfc-66456b7167c0" (UID: "78c50331-2c8a-4ebf-8dfc-66456b7167c0"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.347922 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "78c50331-2c8a-4ebf-8dfc-66456b7167c0" (UID: "78c50331-2c8a-4ebf-8dfc-66456b7167c0"). 
InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.348388 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "78c50331-2c8a-4ebf-8dfc-66456b7167c0" (UID: "78c50331-2c8a-4ebf-8dfc-66456b7167c0"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.350487 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78c50331-2c8a-4ebf-8dfc-66456b7167c0-config-out" (OuterVolumeSpecName: "config-out") pod "78c50331-2c8a-4ebf-8dfc-66456b7167c0" (UID: "78c50331-2c8a-4ebf-8dfc-66456b7167c0"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.386597 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "78c50331-2c8a-4ebf-8dfc-66456b7167c0" (UID: "78c50331-2c8a-4ebf-8dfc-66456b7167c0"). InnerVolumeSpecName "pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.421348 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config" (OuterVolumeSpecName: "web-config") pod "78c50331-2c8a-4ebf-8dfc-66456b7167c0" (UID: "78c50331-2c8a-4ebf-8dfc-66456b7167c0"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.440723 4756 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/78c50331-2c8a-4ebf-8dfc-66456b7167c0-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.440815 4756 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.440836 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l96q\" (UniqueName: \"kubernetes.io/projected/78c50331-2c8a-4ebf-8dfc-66456b7167c0-kube-api-access-7l96q\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.440849 4756 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.440929 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") on node \"crc\" " Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.440970 4756 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.440986 4756 reconciler_common.go:293] "Volume detached for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.440999 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-config\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.441011 4756 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/78c50331-2c8a-4ebf-8dfc-66456b7167c0-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.441042 4756 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c50331-2c8a-4ebf-8dfc-66456b7167c0-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.441051 4756 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/78c50331-2c8a-4ebf-8dfc-66456b7167c0-config-out\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.464182 4756 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.465120 4756 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f") on node "crc" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.543056 4756 reconciler_common.go:293] "Volume detached for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.798392 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78c50331-2c8a-4ebf-8dfc-66456b7167c0","Type":"ContainerDied","Data":"f0592957419004be7389cfdc06a9976c8106f1bf021c023b3c274abdb3cf96e2"} Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.798714 4756 scope.go:117] "RemoveContainer" containerID="3a8884d6cece83da44f47b8603a22592214627015eb425cfe95a9e35c7f61cd2" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.798487 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.824146 4756 scope.go:117] "RemoveContainer" containerID="0cec482adb1b0d7c03f6f4fe6406f63b50d6ca8d44e72059efaa5baf7460cf29" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.827212 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.837870 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.861418 4756 scope.go:117] "RemoveContainer" containerID="ab47ea840acc948a00c4aa917a71f2baab9d0baf5519e86c513395c7b9601068" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.864856 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.876235 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 13:13:08 crc kubenswrapper[4756]: E1124 13:13:08.876710 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="thanos-sidecar" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.876730 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="thanos-sidecar" Nov 24 13:13:08 crc kubenswrapper[4756]: E1124 13:13:08.876749 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="config-reloader" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.876758 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="config-reloader" Nov 24 13:13:08 crc kubenswrapper[4756]: E1124 13:13:08.876778 4756 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="prometheus" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.876786 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="prometheus" Nov 24 13:13:08 crc kubenswrapper[4756]: E1124 13:13:08.876827 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="init-config-reloader" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.876837 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="init-config-reloader" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.877057 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="thanos-sidecar" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.877076 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="prometheus" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.877095 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" containerName="config-reloader" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.880120 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.884099 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.884349 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-d4qzx" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.884897 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.885391 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.892699 4756 scope.go:117] "RemoveContainer" containerID="81ec4e8709b8ea5fd4467ff62ade07a60d36fa89d4b5492bd8ae9ce6f63dfe45" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.895313 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.900681 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.905826 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.951908 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:08 crc 
kubenswrapper[4756]: I1124 13:13:08.951983 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.952059 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djm2r\" (UniqueName: \"kubernetes.io/projected/53e70a60-dcdd-4bec-b24f-b37ed751bb90-kube-api-access-djm2r\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.952144 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.955181 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tx7hd"] Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.956743 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/53e70a60-dcdd-4bec-b24f-b37ed751bb90-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.956861 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/53e70a60-dcdd-4bec-b24f-b37ed751bb90-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.956900 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.956963 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.956991 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-config\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.957078 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" 
(UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:08 crc kubenswrapper[4756]: I1124 13:13:08.957240 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/53e70a60-dcdd-4bec-b24f-b37ed751bb90-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.059665 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djm2r\" (UniqueName: \"kubernetes.io/projected/53e70a60-dcdd-4bec-b24f-b37ed751bb90-kube-api-access-djm2r\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.059794 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.059865 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/53e70a60-dcdd-4bec-b24f-b37ed751bb90-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.059919 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/53e70a60-dcdd-4bec-b24f-b37ed751bb90-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.059949 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.059989 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.060016 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-config\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.060071 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.060131 4756 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/53e70a60-dcdd-4bec-b24f-b37ed751bb90-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.060203 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.060249 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.065356 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/53e70a60-dcdd-4bec-b24f-b37ed751bb90-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.065909 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.069568 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-config\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.070335 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.070920 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/53e70a60-dcdd-4bec-b24f-b37ed751bb90-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.071139 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/53e70a60-dcdd-4bec-b24f-b37ed751bb90-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.072450 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.073502 4756 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.073546 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ac3567aefb4ff022402a71c4c19bba7ed7a13b4fde27606ef830df8410391bf/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.081080 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.082524 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/53e70a60-dcdd-4bec-b24f-b37ed751bb90-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.088290 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djm2r\" (UniqueName: \"kubernetes.io/projected/53e70a60-dcdd-4bec-b24f-b37ed751bb90-kube-api-access-djm2r\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc 
kubenswrapper[4756]: I1124 13:13:09.133532 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0be5da8-3d70-4d6f-b59f-e8f7b67ef65f\") pod \"prometheus-metric-storage-0\" (UID: \"53e70a60-dcdd-4bec-b24f-b37ed751bb90\") " pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.208925 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.652830 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 13:13:09 crc kubenswrapper[4756]: W1124 13:13:09.654755 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53e70a60_dcdd_4bec_b24f_b37ed751bb90.slice/crio-ebb7f5f52aaa8b573915a7b0e7ff6230260f2e3a35f4e7708ac560899487bc12 WatchSource:0}: Error finding container ebb7f5f52aaa8b573915a7b0e7ff6230260f2e3a35f4e7708ac560899487bc12: Status 404 returned error can't find the container with id ebb7f5f52aaa8b573915a7b0e7ff6230260f2e3a35f4e7708ac560899487bc12 Nov 24 13:13:09 crc kubenswrapper[4756]: I1124 13:13:09.814814 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"53e70a60-dcdd-4bec-b24f-b37ed751bb90","Type":"ContainerStarted","Data":"ebb7f5f52aaa8b573915a7b0e7ff6230260f2e3a35f4e7708ac560899487bc12"} Nov 24 13:13:10 crc kubenswrapper[4756]: I1124 13:13:10.488244 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c50331-2c8a-4ebf-8dfc-66456b7167c0" path="/var/lib/kubelet/pods/78c50331-2c8a-4ebf-8dfc-66456b7167c0/volumes" Nov 24 13:13:10 crc kubenswrapper[4756]: I1124 13:13:10.825828 4756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-tx7hd" podUID="0bf17795-c1c5-4448-84df-f12e79cca35f" containerName="registry-server" containerID="cri-o://4a224661cc0c67d72af23104daefce078bdb225bf14e0d902784a79b3c8b4098" gracePeriod=2 Nov 24 13:13:11 crc kubenswrapper[4756]: I1124 13:13:11.839294 4756 generic.go:334] "Generic (PLEG): container finished" podID="0bf17795-c1c5-4448-84df-f12e79cca35f" containerID="4a224661cc0c67d72af23104daefce078bdb225bf14e0d902784a79b3c8b4098" exitCode=0 Nov 24 13:13:11 crc kubenswrapper[4756]: I1124 13:13:11.839391 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx7hd" event={"ID":"0bf17795-c1c5-4448-84df-f12e79cca35f","Type":"ContainerDied","Data":"4a224661cc0c67d72af23104daefce078bdb225bf14e0d902784a79b3c8b4098"} Nov 24 13:13:11 crc kubenswrapper[4756]: I1124 13:13:11.839660 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tx7hd" event={"ID":"0bf17795-c1c5-4448-84df-f12e79cca35f","Type":"ContainerDied","Data":"ce62c7d8de4c66572454d5592e3bd3b2cf0e23c03135c8c210ad7cbcc9c714c9"} Nov 24 13:13:11 crc kubenswrapper[4756]: I1124 13:13:11.839687 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce62c7d8de4c66572454d5592e3bd3b2cf0e23c03135c8c210ad7cbcc9c714c9" Nov 24 13:13:11 crc kubenswrapper[4756]: I1124 13:13:11.867247 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:13:12 crc kubenswrapper[4756]: I1124 13:13:12.026767 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf17795-c1c5-4448-84df-f12e79cca35f-utilities\") pod \"0bf17795-c1c5-4448-84df-f12e79cca35f\" (UID: \"0bf17795-c1c5-4448-84df-f12e79cca35f\") " Nov 24 13:13:12 crc kubenswrapper[4756]: I1124 13:13:12.026815 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf17795-c1c5-4448-84df-f12e79cca35f-catalog-content\") pod \"0bf17795-c1c5-4448-84df-f12e79cca35f\" (UID: \"0bf17795-c1c5-4448-84df-f12e79cca35f\") " Nov 24 13:13:12 crc kubenswrapper[4756]: I1124 13:13:12.026949 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6x9x\" (UniqueName: \"kubernetes.io/projected/0bf17795-c1c5-4448-84df-f12e79cca35f-kube-api-access-s6x9x\") pod \"0bf17795-c1c5-4448-84df-f12e79cca35f\" (UID: \"0bf17795-c1c5-4448-84df-f12e79cca35f\") " Nov 24 13:13:12 crc kubenswrapper[4756]: I1124 13:13:12.027514 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf17795-c1c5-4448-84df-f12e79cca35f-utilities" (OuterVolumeSpecName: "utilities") pod "0bf17795-c1c5-4448-84df-f12e79cca35f" (UID: "0bf17795-c1c5-4448-84df-f12e79cca35f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:13:12 crc kubenswrapper[4756]: I1124 13:13:12.033730 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf17795-c1c5-4448-84df-f12e79cca35f-kube-api-access-s6x9x" (OuterVolumeSpecName: "kube-api-access-s6x9x") pod "0bf17795-c1c5-4448-84df-f12e79cca35f" (UID: "0bf17795-c1c5-4448-84df-f12e79cca35f"). InnerVolumeSpecName "kube-api-access-s6x9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:13:12 crc kubenswrapper[4756]: I1124 13:13:12.077962 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf17795-c1c5-4448-84df-f12e79cca35f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bf17795-c1c5-4448-84df-f12e79cca35f" (UID: "0bf17795-c1c5-4448-84df-f12e79cca35f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:13:12 crc kubenswrapper[4756]: I1124 13:13:12.129273 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6x9x\" (UniqueName: \"kubernetes.io/projected/0bf17795-c1c5-4448-84df-f12e79cca35f-kube-api-access-s6x9x\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:12 crc kubenswrapper[4756]: I1124 13:13:12.129306 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf17795-c1c5-4448-84df-f12e79cca35f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:12 crc kubenswrapper[4756]: I1124 13:13:12.129315 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf17795-c1c5-4448-84df-f12e79cca35f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:13:12 crc kubenswrapper[4756]: I1124 13:13:12.850176 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tx7hd" Nov 24 13:13:12 crc kubenswrapper[4756]: I1124 13:13:12.885414 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tx7hd"] Nov 24 13:13:12 crc kubenswrapper[4756]: I1124 13:13:12.896845 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tx7hd"] Nov 24 13:13:13 crc kubenswrapper[4756]: I1124 13:13:13.888797 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"53e70a60-dcdd-4bec-b24f-b37ed751bb90","Type":"ContainerStarted","Data":"26debfa93cbbb17e4313ca3c87deb33661a75a0191b01318c2c50dceb0a6c6d0"} Nov 24 13:13:14 crc kubenswrapper[4756]: I1124 13:13:14.487043 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf17795-c1c5-4448-84df-f12e79cca35f" path="/var/lib/kubelet/pods/0bf17795-c1c5-4448-84df-f12e79cca35f/volumes" Nov 24 13:13:23 crc kubenswrapper[4756]: I1124 13:13:23.001554 4756 generic.go:334] "Generic (PLEG): container finished" podID="53e70a60-dcdd-4bec-b24f-b37ed751bb90" containerID="26debfa93cbbb17e4313ca3c87deb33661a75a0191b01318c2c50dceb0a6c6d0" exitCode=0 Nov 24 13:13:23 crc kubenswrapper[4756]: I1124 13:13:23.001636 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"53e70a60-dcdd-4bec-b24f-b37ed751bb90","Type":"ContainerDied","Data":"26debfa93cbbb17e4313ca3c87deb33661a75a0191b01318c2c50dceb0a6c6d0"} Nov 24 13:13:24 crc kubenswrapper[4756]: I1124 13:13:24.015335 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"53e70a60-dcdd-4bec-b24f-b37ed751bb90","Type":"ContainerStarted","Data":"c0eac5b16bb2bb49f457b72bc8a490139f89810901a6eb36162a050bc67a46b0"} Nov 24 13:13:27 crc kubenswrapper[4756]: I1124 13:13:27.056564 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"53e70a60-dcdd-4bec-b24f-b37ed751bb90","Type":"ContainerStarted","Data":"7eb6569f63cc4754ac035764c48281a7bcd972460d2859cbeb2bc7985cfa0fa1"} Nov 24 13:13:27 crc kubenswrapper[4756]: I1124 13:13:27.057331 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"53e70a60-dcdd-4bec-b24f-b37ed751bb90","Type":"ContainerStarted","Data":"378125e28f53aa47e0cbe8409d23c5c627f571f76b9ec3ab976c664f2303eb71"} Nov 24 13:13:27 crc kubenswrapper[4756]: I1124 13:13:27.118576 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.118550717 podStartE2EDuration="19.118550717s" podCreationTimestamp="2025-11-24 13:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:13:27.10173002 +0000 UTC m=+2739.459244182" watchObservedRunningTime="2025-11-24 13:13:27.118550717 +0000 UTC m=+2739.476064879" Nov 24 13:13:29 crc kubenswrapper[4756]: I1124 13:13:29.209339 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:33 crc kubenswrapper[4756]: I1124 13:13:33.478999 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:13:33 crc kubenswrapper[4756]: I1124 13:13:33.479474 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 24 13:13:39 crc kubenswrapper[4756]: I1124 13:13:39.211803 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:39 crc kubenswrapper[4756]: I1124 13:13:39.216169 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:40 crc kubenswrapper[4756]: I1124 13:13:40.189514 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.702833 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 13:13:48 crc kubenswrapper[4756]: E1124 13:13:48.703856 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf17795-c1c5-4448-84df-f12e79cca35f" containerName="registry-server" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.703874 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf17795-c1c5-4448-84df-f12e79cca35f" containerName="registry-server" Nov 24 13:13:48 crc kubenswrapper[4756]: E1124 13:13:48.703908 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf17795-c1c5-4448-84df-f12e79cca35f" containerName="extract-utilities" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.703943 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf17795-c1c5-4448-84df-f12e79cca35f" containerName="extract-utilities" Nov 24 13:13:48 crc kubenswrapper[4756]: E1124 13:13:48.703954 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf17795-c1c5-4448-84df-f12e79cca35f" containerName="extract-content" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.703964 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf17795-c1c5-4448-84df-f12e79cca35f" containerName="extract-content" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.705215 4756 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0bf17795-c1c5-4448-84df-f12e79cca35f" containerName="registry-server" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.706680 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.706787 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.709515 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.709567 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.709625 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hwjbh" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.713261 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.763850 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/931a5dda-ad1f-4595-a5b8-3b1820afb648-config-data\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.764345 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: 
I1124 13:13:48.764488 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/931a5dda-ad1f-4595-a5b8-3b1820afb648-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.866335 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/931a5dda-ad1f-4595-a5b8-3b1820afb648-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.866397 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/931a5dda-ad1f-4595-a5b8-3b1820afb648-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.866479 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.866601 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.866699 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/931a5dda-ad1f-4595-a5b8-3b1820afb648-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.866808 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.866848 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/931a5dda-ad1f-4595-a5b8-3b1820afb648-config-data\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.867014 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.867047 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzllb\" (UniqueName: \"kubernetes.io/projected/931a5dda-ad1f-4595-a5b8-3b1820afb648-kube-api-access-hzllb\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.867376 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/931a5dda-ad1f-4595-a5b8-3b1820afb648-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.868763 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/931a5dda-ad1f-4595-a5b8-3b1820afb648-config-data\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.875455 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.968699 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/931a5dda-ad1f-4595-a5b8-3b1820afb648-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.968814 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.968886 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzllb\" 
(UniqueName: \"kubernetes.io/projected/931a5dda-ad1f-4595-a5b8-3b1820afb648-kube-api-access-hzllb\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.968943 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/931a5dda-ad1f-4595-a5b8-3b1820afb648-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.969011 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.969057 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.969194 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/931a5dda-ad1f-4595-a5b8-3b1820afb648-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.969981 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/931a5dda-ad1f-4595-a5b8-3b1820afb648-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.970346 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.973550 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.976943 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:48 crc kubenswrapper[4756]: I1124 13:13:48.995216 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzllb\" (UniqueName: \"kubernetes.io/projected/931a5dda-ad1f-4595-a5b8-3b1820afb648-kube-api-access-hzllb\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " pod="openstack/tempest-tests-tempest" Nov 24 13:13:49 crc kubenswrapper[4756]: I1124 13:13:49.014611 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " 
pod="openstack/tempest-tests-tempest" Nov 24 13:13:49 crc kubenswrapper[4756]: I1124 13:13:49.039064 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 13:13:49 crc kubenswrapper[4756]: I1124 13:13:49.687633 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 13:13:50 crc kubenswrapper[4756]: I1124 13:13:50.706396 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"931a5dda-ad1f-4595-a5b8-3b1820afb648","Type":"ContainerStarted","Data":"55e3ca6f68bae72991e1e51b5d5b8841f0cec1f7195c8c7bbc4db116b06ae275"} Nov 24 13:14:03 crc kubenswrapper[4756]: E1124 13:14:03.160678 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.217:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest" Nov 24 13:14:03 crc kubenswrapper[4756]: E1124 13:14:03.161184 4756 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.217:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest" Nov 24 13:14:03 crc kubenswrapper[4756]: E1124 13:14:03.161370 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:38.102.83.217:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzllb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessP
robe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(931a5dda-ad1f-4595-a5b8-3b1820afb648): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 13:14:03 crc kubenswrapper[4756]: E1124 13:14:03.162700 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="931a5dda-ad1f-4595-a5b8-3b1820afb648" Nov 24 13:14:03 crc kubenswrapper[4756]: I1124 13:14:03.478806 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:14:03 crc kubenswrapper[4756]: I1124 13:14:03.478914 4756 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:14:03 crc kubenswrapper[4756]: I1124 13:14:03.479000 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 13:14:03 crc kubenswrapper[4756]: I1124 13:14:03.480199 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"968adb00eba3972fea3c57ea56020fa61fc87024729430d232e9e3d2fd8e7600"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:14:03 crc kubenswrapper[4756]: I1124 13:14:03.480387 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://968adb00eba3972fea3c57ea56020fa61fc87024729430d232e9e3d2fd8e7600" gracePeriod=600 Nov 24 13:14:03 crc kubenswrapper[4756]: I1124 13:14:03.861041 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="968adb00eba3972fea3c57ea56020fa61fc87024729430d232e9e3d2fd8e7600" exitCode=0 Nov 24 13:14:03 crc kubenswrapper[4756]: I1124 13:14:03.861118 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"968adb00eba3972fea3c57ea56020fa61fc87024729430d232e9e3d2fd8e7600"} Nov 24 13:14:03 crc kubenswrapper[4756]: I1124 13:14:03.861366 
4756 scope.go:117] "RemoveContainer" containerID="432de5fdb96ed6a255a72de15217d2342095338b1a2b647267fe89829048a4c3" Nov 24 13:14:03 crc kubenswrapper[4756]: E1124 13:14:03.863342 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.217:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest\\\"\"" pod="openstack/tempest-tests-tempest" podUID="931a5dda-ad1f-4595-a5b8-3b1820afb648" Nov 24 13:14:04 crc kubenswrapper[4756]: I1124 13:14:04.878474 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622"} Nov 24 13:14:19 crc kubenswrapper[4756]: I1124 13:14:19.055996 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"931a5dda-ad1f-4595-a5b8-3b1820afb648","Type":"ContainerStarted","Data":"40659af9dc86c4fbf7aaff48df1e1dcbeef33e3a1a23de6053348ba1586b5ce0"} Nov 24 13:14:19 crc kubenswrapper[4756]: I1124 13:14:19.083826 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.217573567 podStartE2EDuration="32.083798092s" podCreationTimestamp="2025-11-24 13:13:47 +0000 UTC" firstStartedPulling="2025-11-24 13:13:49.703960053 +0000 UTC m=+2762.061474195" lastFinishedPulling="2025-11-24 13:14:17.570184538 +0000 UTC m=+2789.927698720" observedRunningTime="2025-11-24 13:14:19.078171919 +0000 UTC m=+2791.435686061" watchObservedRunningTime="2025-11-24 13:14:19.083798092 +0000 UTC m=+2791.441312254" Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.150432 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m"] Nov 24 
13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.155071 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.157337 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.158525 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.166323 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m"] Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.203868 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b6d1101-020e-478a-ad65-5a6b9da5e271-secret-volume\") pod \"collect-profiles-29399835-hkz8m\" (UID: \"6b6d1101-020e-478a-ad65-5a6b9da5e271\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.203926 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfp45\" (UniqueName: \"kubernetes.io/projected/6b6d1101-020e-478a-ad65-5a6b9da5e271-kube-api-access-zfp45\") pod \"collect-profiles-29399835-hkz8m\" (UID: \"6b6d1101-020e-478a-ad65-5a6b9da5e271\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.204106 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b6d1101-020e-478a-ad65-5a6b9da5e271-config-volume\") pod 
\"collect-profiles-29399835-hkz8m\" (UID: \"6b6d1101-020e-478a-ad65-5a6b9da5e271\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.305874 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b6d1101-020e-478a-ad65-5a6b9da5e271-secret-volume\") pod \"collect-profiles-29399835-hkz8m\" (UID: \"6b6d1101-020e-478a-ad65-5a6b9da5e271\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.305923 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfp45\" (UniqueName: \"kubernetes.io/projected/6b6d1101-020e-478a-ad65-5a6b9da5e271-kube-api-access-zfp45\") pod \"collect-profiles-29399835-hkz8m\" (UID: \"6b6d1101-020e-478a-ad65-5a6b9da5e271\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.306028 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b6d1101-020e-478a-ad65-5a6b9da5e271-config-volume\") pod \"collect-profiles-29399835-hkz8m\" (UID: \"6b6d1101-020e-478a-ad65-5a6b9da5e271\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.307409 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b6d1101-020e-478a-ad65-5a6b9da5e271-config-volume\") pod \"collect-profiles-29399835-hkz8m\" (UID: \"6b6d1101-020e-478a-ad65-5a6b9da5e271\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.312431 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/6b6d1101-020e-478a-ad65-5a6b9da5e271-secret-volume\") pod \"collect-profiles-29399835-hkz8m\" (UID: \"6b6d1101-020e-478a-ad65-5a6b9da5e271\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.323408 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfp45\" (UniqueName: \"kubernetes.io/projected/6b6d1101-020e-478a-ad65-5a6b9da5e271-kube-api-access-zfp45\") pod \"collect-profiles-29399835-hkz8m\" (UID: \"6b6d1101-020e-478a-ad65-5a6b9da5e271\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" Nov 24 13:15:00 crc kubenswrapper[4756]: I1124 13:15:00.518657 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" Nov 24 13:15:01 crc kubenswrapper[4756]: I1124 13:15:01.015201 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m"] Nov 24 13:15:01 crc kubenswrapper[4756]: I1124 13:15:01.550819 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" event={"ID":"6b6d1101-020e-478a-ad65-5a6b9da5e271","Type":"ContainerStarted","Data":"22ca88d93d1871afd559fa583ef534b987418871d47c0d7bf76683b04d7b61a7"} Nov 24 13:15:01 crc kubenswrapper[4756]: I1124 13:15:01.551301 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" event={"ID":"6b6d1101-020e-478a-ad65-5a6b9da5e271","Type":"ContainerStarted","Data":"ddda27efe0b6558f11aaee1d5ac636c76a9110d26db685f8cc807265040d8cb1"} Nov 24 13:15:01 crc kubenswrapper[4756]: I1124 13:15:01.571043 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" 
podStartSLOduration=1.571023615 podStartE2EDuration="1.571023615s" podCreationTimestamp="2025-11-24 13:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:15:01.564995691 +0000 UTC m=+2833.922509853" watchObservedRunningTime="2025-11-24 13:15:01.571023615 +0000 UTC m=+2833.928537767" Nov 24 13:15:02 crc kubenswrapper[4756]: I1124 13:15:02.566326 4756 generic.go:334] "Generic (PLEG): container finished" podID="6b6d1101-020e-478a-ad65-5a6b9da5e271" containerID="22ca88d93d1871afd559fa583ef534b987418871d47c0d7bf76683b04d7b61a7" exitCode=0 Nov 24 13:15:02 crc kubenswrapper[4756]: I1124 13:15:02.566394 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" event={"ID":"6b6d1101-020e-478a-ad65-5a6b9da5e271","Type":"ContainerDied","Data":"22ca88d93d1871afd559fa583ef534b987418871d47c0d7bf76683b04d7b61a7"} Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.048935 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.103797 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b6d1101-020e-478a-ad65-5a6b9da5e271-secret-volume\") pod \"6b6d1101-020e-478a-ad65-5a6b9da5e271\" (UID: \"6b6d1101-020e-478a-ad65-5a6b9da5e271\") " Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.103902 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfp45\" (UniqueName: \"kubernetes.io/projected/6b6d1101-020e-478a-ad65-5a6b9da5e271-kube-api-access-zfp45\") pod \"6b6d1101-020e-478a-ad65-5a6b9da5e271\" (UID: \"6b6d1101-020e-478a-ad65-5a6b9da5e271\") " Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.103950 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b6d1101-020e-478a-ad65-5a6b9da5e271-config-volume\") pod \"6b6d1101-020e-478a-ad65-5a6b9da5e271\" (UID: \"6b6d1101-020e-478a-ad65-5a6b9da5e271\") " Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.105085 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b6d1101-020e-478a-ad65-5a6b9da5e271-config-volume" (OuterVolumeSpecName: "config-volume") pod "6b6d1101-020e-478a-ad65-5a6b9da5e271" (UID: "6b6d1101-020e-478a-ad65-5a6b9da5e271"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.112079 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b6d1101-020e-478a-ad65-5a6b9da5e271-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6b6d1101-020e-478a-ad65-5a6b9da5e271" (UID: "6b6d1101-020e-478a-ad65-5a6b9da5e271"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.112120 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6d1101-020e-478a-ad65-5a6b9da5e271-kube-api-access-zfp45" (OuterVolumeSpecName: "kube-api-access-zfp45") pod "6b6d1101-020e-478a-ad65-5a6b9da5e271" (UID: "6b6d1101-020e-478a-ad65-5a6b9da5e271"). InnerVolumeSpecName "kube-api-access-zfp45". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.205894 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfp45\" (UniqueName: \"kubernetes.io/projected/6b6d1101-020e-478a-ad65-5a6b9da5e271-kube-api-access-zfp45\") on node \"crc\" DevicePath \"\"" Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.205967 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b6d1101-020e-478a-ad65-5a6b9da5e271-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.205979 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b6d1101-020e-478a-ad65-5a6b9da5e271-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.615021 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" event={"ID":"6b6d1101-020e-478a-ad65-5a6b9da5e271","Type":"ContainerDied","Data":"ddda27efe0b6558f11aaee1d5ac636c76a9110d26db685f8cc807265040d8cb1"} Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.615085 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddda27efe0b6558f11aaee1d5ac636c76a9110d26db685f8cc807265040d8cb1" Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.615094 4756 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m" Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.704783 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr"] Nov 24 13:15:04 crc kubenswrapper[4756]: I1124 13:15:04.718354 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399790-5rwrr"] Nov 24 13:15:06 crc kubenswrapper[4756]: I1124 13:15:06.488556 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfeb738d-4835-45a7-90a2-440d45459f4d" path="/var/lib/kubelet/pods/bfeb738d-4835-45a7-90a2-440d45459f4d/volumes" Nov 24 13:15:20 crc kubenswrapper[4756]: I1124 13:15:20.093406 4756 scope.go:117] "RemoveContainer" containerID="50b2a6915da19d6edc0fc76894592f22ec1f0e1d6973aae18f36cb07595223ad" Nov 24 13:16:03 crc kubenswrapper[4756]: I1124 13:16:03.478978 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:16:03 crc kubenswrapper[4756]: I1124 13:16:03.479545 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:16:04 crc kubenswrapper[4756]: I1124 13:16:04.964010 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s2bst"] Nov 24 13:16:04 crc kubenswrapper[4756]: E1124 13:16:04.967731 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6b6d1101-020e-478a-ad65-5a6b9da5e271" containerName="collect-profiles" Nov 24 13:16:04 crc kubenswrapper[4756]: I1124 13:16:04.967950 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6d1101-020e-478a-ad65-5a6b9da5e271" containerName="collect-profiles" Nov 24 13:16:04 crc kubenswrapper[4756]: I1124 13:16:04.968594 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6d1101-020e-478a-ad65-5a6b9da5e271" containerName="collect-profiles" Nov 24 13:16:04 crc kubenswrapper[4756]: I1124 13:16:04.971580 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:04 crc kubenswrapper[4756]: I1124 13:16:04.981698 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2bst"] Nov 24 13:16:05 crc kubenswrapper[4756]: I1124 13:16:05.114562 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51aa37f3-f6e5-4c74-8590-d603da72ae02-utilities\") pod \"community-operators-s2bst\" (UID: \"51aa37f3-f6e5-4c74-8590-d603da72ae02\") " pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:05 crc kubenswrapper[4756]: I1124 13:16:05.114636 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51aa37f3-f6e5-4c74-8590-d603da72ae02-catalog-content\") pod \"community-operators-s2bst\" (UID: \"51aa37f3-f6e5-4c74-8590-d603da72ae02\") " pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:05 crc kubenswrapper[4756]: I1124 13:16:05.114685 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rnpp\" (UniqueName: \"kubernetes.io/projected/51aa37f3-f6e5-4c74-8590-d603da72ae02-kube-api-access-7rnpp\") pod \"community-operators-s2bst\" (UID: 
\"51aa37f3-f6e5-4c74-8590-d603da72ae02\") " pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:05 crc kubenswrapper[4756]: I1124 13:16:05.217551 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51aa37f3-f6e5-4c74-8590-d603da72ae02-utilities\") pod \"community-operators-s2bst\" (UID: \"51aa37f3-f6e5-4c74-8590-d603da72ae02\") " pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:05 crc kubenswrapper[4756]: I1124 13:16:05.217629 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51aa37f3-f6e5-4c74-8590-d603da72ae02-catalog-content\") pod \"community-operators-s2bst\" (UID: \"51aa37f3-f6e5-4c74-8590-d603da72ae02\") " pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:05 crc kubenswrapper[4756]: I1124 13:16:05.217672 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rnpp\" (UniqueName: \"kubernetes.io/projected/51aa37f3-f6e5-4c74-8590-d603da72ae02-kube-api-access-7rnpp\") pod \"community-operators-s2bst\" (UID: \"51aa37f3-f6e5-4c74-8590-d603da72ae02\") " pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:05 crc kubenswrapper[4756]: I1124 13:16:05.218234 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51aa37f3-f6e5-4c74-8590-d603da72ae02-utilities\") pod \"community-operators-s2bst\" (UID: \"51aa37f3-f6e5-4c74-8590-d603da72ae02\") " pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:05 crc kubenswrapper[4756]: I1124 13:16:05.218308 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51aa37f3-f6e5-4c74-8590-d603da72ae02-catalog-content\") pod \"community-operators-s2bst\" (UID: \"51aa37f3-f6e5-4c74-8590-d603da72ae02\") 
" pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:05 crc kubenswrapper[4756]: I1124 13:16:05.259195 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rnpp\" (UniqueName: \"kubernetes.io/projected/51aa37f3-f6e5-4c74-8590-d603da72ae02-kube-api-access-7rnpp\") pod \"community-operators-s2bst\" (UID: \"51aa37f3-f6e5-4c74-8590-d603da72ae02\") " pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:05 crc kubenswrapper[4756]: I1124 13:16:05.316064 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:05 crc kubenswrapper[4756]: I1124 13:16:05.900918 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2bst"] Nov 24 13:16:06 crc kubenswrapper[4756]: I1124 13:16:06.350430 4756 generic.go:334] "Generic (PLEG): container finished" podID="51aa37f3-f6e5-4c74-8590-d603da72ae02" containerID="5b13b70de061e180736375552ef61f2fd2cfa5ff22440c7719741c6fc834e015" exitCode=0 Nov 24 13:16:06 crc kubenswrapper[4756]: I1124 13:16:06.350497 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2bst" event={"ID":"51aa37f3-f6e5-4c74-8590-d603da72ae02","Type":"ContainerDied","Data":"5b13b70de061e180736375552ef61f2fd2cfa5ff22440c7719741c6fc834e015"} Nov 24 13:16:06 crc kubenswrapper[4756]: I1124 13:16:06.350540 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2bst" event={"ID":"51aa37f3-f6e5-4c74-8590-d603da72ae02","Type":"ContainerStarted","Data":"9989354edf93f2c2c898a37480b9fc7a7f240f878d4ba9920fc79abe66022bcd"} Nov 24 13:16:06 crc kubenswrapper[4756]: I1124 13:16:06.353879 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 13:16:07 crc kubenswrapper[4756]: I1124 13:16:07.363999 4756 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-s2bst" event={"ID":"51aa37f3-f6e5-4c74-8590-d603da72ae02","Type":"ContainerStarted","Data":"cbf0b38a94d501b95be3eb20a28f77b4f6daff3a252f60b76e50f2de21d0f5e6"} Nov 24 13:16:08 crc kubenswrapper[4756]: I1124 13:16:08.376506 4756 generic.go:334] "Generic (PLEG): container finished" podID="51aa37f3-f6e5-4c74-8590-d603da72ae02" containerID="cbf0b38a94d501b95be3eb20a28f77b4f6daff3a252f60b76e50f2de21d0f5e6" exitCode=0 Nov 24 13:16:08 crc kubenswrapper[4756]: I1124 13:16:08.376576 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2bst" event={"ID":"51aa37f3-f6e5-4c74-8590-d603da72ae02","Type":"ContainerDied","Data":"cbf0b38a94d501b95be3eb20a28f77b4f6daff3a252f60b76e50f2de21d0f5e6"} Nov 24 13:16:09 crc kubenswrapper[4756]: I1124 13:16:09.388462 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2bst" event={"ID":"51aa37f3-f6e5-4c74-8590-d603da72ae02","Type":"ContainerStarted","Data":"0c3420e0ecc9ddeec6c126ffc9521788cff79c0361fa29d372fc1fe154a11302"} Nov 24 13:16:09 crc kubenswrapper[4756]: I1124 13:16:09.414810 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s2bst" podStartSLOduration=2.754425953 podStartE2EDuration="5.414792228s" podCreationTimestamp="2025-11-24 13:16:04 +0000 UTC" firstStartedPulling="2025-11-24 13:16:06.353444411 +0000 UTC m=+2898.710958593" lastFinishedPulling="2025-11-24 13:16:09.013810716 +0000 UTC m=+2901.371324868" observedRunningTime="2025-11-24 13:16:09.408555929 +0000 UTC m=+2901.766070071" watchObservedRunningTime="2025-11-24 13:16:09.414792228 +0000 UTC m=+2901.772306370" Nov 24 13:16:15 crc kubenswrapper[4756]: I1124 13:16:15.317282 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:15 crc kubenswrapper[4756]: I1124 
13:16:15.317920 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:15 crc kubenswrapper[4756]: I1124 13:16:15.374127 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:15 crc kubenswrapper[4756]: I1124 13:16:15.571084 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:15 crc kubenswrapper[4756]: I1124 13:16:15.629565 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2bst"] Nov 24 13:16:17 crc kubenswrapper[4756]: I1124 13:16:17.485905 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s2bst" podUID="51aa37f3-f6e5-4c74-8590-d603da72ae02" containerName="registry-server" containerID="cri-o://0c3420e0ecc9ddeec6c126ffc9521788cff79c0361fa29d372fc1fe154a11302" gracePeriod=2 Nov 24 13:16:17 crc kubenswrapper[4756]: I1124 13:16:17.955611 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.031587 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51aa37f3-f6e5-4c74-8590-d603da72ae02-catalog-content\") pod \"51aa37f3-f6e5-4c74-8590-d603da72ae02\" (UID: \"51aa37f3-f6e5-4c74-8590-d603da72ae02\") " Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.031862 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rnpp\" (UniqueName: \"kubernetes.io/projected/51aa37f3-f6e5-4c74-8590-d603da72ae02-kube-api-access-7rnpp\") pod \"51aa37f3-f6e5-4c74-8590-d603da72ae02\" (UID: \"51aa37f3-f6e5-4c74-8590-d603da72ae02\") " Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.031972 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51aa37f3-f6e5-4c74-8590-d603da72ae02-utilities\") pod \"51aa37f3-f6e5-4c74-8590-d603da72ae02\" (UID: \"51aa37f3-f6e5-4c74-8590-d603da72ae02\") " Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.032977 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51aa37f3-f6e5-4c74-8590-d603da72ae02-utilities" (OuterVolumeSpecName: "utilities") pod "51aa37f3-f6e5-4c74-8590-d603da72ae02" (UID: "51aa37f3-f6e5-4c74-8590-d603da72ae02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.038553 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51aa37f3-f6e5-4c74-8590-d603da72ae02-kube-api-access-7rnpp" (OuterVolumeSpecName: "kube-api-access-7rnpp") pod "51aa37f3-f6e5-4c74-8590-d603da72ae02" (UID: "51aa37f3-f6e5-4c74-8590-d603da72ae02"). InnerVolumeSpecName "kube-api-access-7rnpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.088448 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51aa37f3-f6e5-4c74-8590-d603da72ae02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51aa37f3-f6e5-4c74-8590-d603da72ae02" (UID: "51aa37f3-f6e5-4c74-8590-d603da72ae02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.134486 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51aa37f3-f6e5-4c74-8590-d603da72ae02-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.134522 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rnpp\" (UniqueName: \"kubernetes.io/projected/51aa37f3-f6e5-4c74-8590-d603da72ae02-kube-api-access-7rnpp\") on node \"crc\" DevicePath \"\"" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.134535 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51aa37f3-f6e5-4c74-8590-d603da72ae02-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.499282 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s2bst" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.499293 4756 generic.go:334] "Generic (PLEG): container finished" podID="51aa37f3-f6e5-4c74-8590-d603da72ae02" containerID="0c3420e0ecc9ddeec6c126ffc9521788cff79c0361fa29d372fc1fe154a11302" exitCode=0 Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.499315 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2bst" event={"ID":"51aa37f3-f6e5-4c74-8590-d603da72ae02","Type":"ContainerDied","Data":"0c3420e0ecc9ddeec6c126ffc9521788cff79c0361fa29d372fc1fe154a11302"} Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.499401 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2bst" event={"ID":"51aa37f3-f6e5-4c74-8590-d603da72ae02","Type":"ContainerDied","Data":"9989354edf93f2c2c898a37480b9fc7a7f240f878d4ba9920fc79abe66022bcd"} Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.499429 4756 scope.go:117] "RemoveContainer" containerID="0c3420e0ecc9ddeec6c126ffc9521788cff79c0361fa29d372fc1fe154a11302" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.532248 4756 scope.go:117] "RemoveContainer" containerID="cbf0b38a94d501b95be3eb20a28f77b4f6daff3a252f60b76e50f2de21d0f5e6" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.537717 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2bst"] Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.548700 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s2bst"] Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.560255 4756 scope.go:117] "RemoveContainer" containerID="5b13b70de061e180736375552ef61f2fd2cfa5ff22440c7719741c6fc834e015" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.628921 4756 scope.go:117] "RemoveContainer" 
containerID="0c3420e0ecc9ddeec6c126ffc9521788cff79c0361fa29d372fc1fe154a11302" Nov 24 13:16:18 crc kubenswrapper[4756]: E1124 13:16:18.630170 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c3420e0ecc9ddeec6c126ffc9521788cff79c0361fa29d372fc1fe154a11302\": container with ID starting with 0c3420e0ecc9ddeec6c126ffc9521788cff79c0361fa29d372fc1fe154a11302 not found: ID does not exist" containerID="0c3420e0ecc9ddeec6c126ffc9521788cff79c0361fa29d372fc1fe154a11302" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.630218 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c3420e0ecc9ddeec6c126ffc9521788cff79c0361fa29d372fc1fe154a11302"} err="failed to get container status \"0c3420e0ecc9ddeec6c126ffc9521788cff79c0361fa29d372fc1fe154a11302\": rpc error: code = NotFound desc = could not find container \"0c3420e0ecc9ddeec6c126ffc9521788cff79c0361fa29d372fc1fe154a11302\": container with ID starting with 0c3420e0ecc9ddeec6c126ffc9521788cff79c0361fa29d372fc1fe154a11302 not found: ID does not exist" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.630247 4756 scope.go:117] "RemoveContainer" containerID="cbf0b38a94d501b95be3eb20a28f77b4f6daff3a252f60b76e50f2de21d0f5e6" Nov 24 13:16:18 crc kubenswrapper[4756]: E1124 13:16:18.630567 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbf0b38a94d501b95be3eb20a28f77b4f6daff3a252f60b76e50f2de21d0f5e6\": container with ID starting with cbf0b38a94d501b95be3eb20a28f77b4f6daff3a252f60b76e50f2de21d0f5e6 not found: ID does not exist" containerID="cbf0b38a94d501b95be3eb20a28f77b4f6daff3a252f60b76e50f2de21d0f5e6" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.630599 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cbf0b38a94d501b95be3eb20a28f77b4f6daff3a252f60b76e50f2de21d0f5e6"} err="failed to get container status \"cbf0b38a94d501b95be3eb20a28f77b4f6daff3a252f60b76e50f2de21d0f5e6\": rpc error: code = NotFound desc = could not find container \"cbf0b38a94d501b95be3eb20a28f77b4f6daff3a252f60b76e50f2de21d0f5e6\": container with ID starting with cbf0b38a94d501b95be3eb20a28f77b4f6daff3a252f60b76e50f2de21d0f5e6 not found: ID does not exist" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.630619 4756 scope.go:117] "RemoveContainer" containerID="5b13b70de061e180736375552ef61f2fd2cfa5ff22440c7719741c6fc834e015" Nov 24 13:16:18 crc kubenswrapper[4756]: E1124 13:16:18.631046 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b13b70de061e180736375552ef61f2fd2cfa5ff22440c7719741c6fc834e015\": container with ID starting with 5b13b70de061e180736375552ef61f2fd2cfa5ff22440c7719741c6fc834e015 not found: ID does not exist" containerID="5b13b70de061e180736375552ef61f2fd2cfa5ff22440c7719741c6fc834e015" Nov 24 13:16:18 crc kubenswrapper[4756]: I1124 13:16:18.631081 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b13b70de061e180736375552ef61f2fd2cfa5ff22440c7719741c6fc834e015"} err="failed to get container status \"5b13b70de061e180736375552ef61f2fd2cfa5ff22440c7719741c6fc834e015\": rpc error: code = NotFound desc = could not find container \"5b13b70de061e180736375552ef61f2fd2cfa5ff22440c7719741c6fc834e015\": container with ID starting with 5b13b70de061e180736375552ef61f2fd2cfa5ff22440c7719741c6fc834e015 not found: ID does not exist" Nov 24 13:16:20 crc kubenswrapper[4756]: I1124 13:16:20.497847 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51aa37f3-f6e5-4c74-8590-d603da72ae02" path="/var/lib/kubelet/pods/51aa37f3-f6e5-4c74-8590-d603da72ae02/volumes" Nov 24 13:16:33 crc kubenswrapper[4756]: I1124 
13:16:33.479322 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:16:33 crc kubenswrapper[4756]: I1124 13:16:33.479835 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:17:03 crc kubenswrapper[4756]: I1124 13:17:03.479356 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:17:03 crc kubenswrapper[4756]: I1124 13:17:03.479841 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:17:03 crc kubenswrapper[4756]: I1124 13:17:03.479895 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 13:17:03 crc kubenswrapper[4756]: I1124 13:17:03.480537 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:17:03 crc kubenswrapper[4756]: I1124 13:17:03.480591 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" gracePeriod=600 Nov 24 13:17:03 crc kubenswrapper[4756]: E1124 13:17:03.619660 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:17:04 crc kubenswrapper[4756]: I1124 13:17:04.025893 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" exitCode=0 Nov 24 13:17:04 crc kubenswrapper[4756]: I1124 13:17:04.025968 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622"} Nov 24 13:17:04 crc kubenswrapper[4756]: I1124 13:17:04.026260 4756 scope.go:117] "RemoveContainer" containerID="968adb00eba3972fea3c57ea56020fa61fc87024729430d232e9e3d2fd8e7600" Nov 24 13:17:04 crc kubenswrapper[4756]: I1124 13:17:04.027031 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:17:04 crc kubenswrapper[4756]: E1124 13:17:04.027398 4756 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:17:18 crc kubenswrapper[4756]: I1124 13:17:18.476979 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:17:18 crc kubenswrapper[4756]: E1124 13:17:18.477804 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:17:31 crc kubenswrapper[4756]: I1124 13:17:31.476172 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:17:31 crc kubenswrapper[4756]: E1124 13:17:31.477081 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:17:45 crc kubenswrapper[4756]: I1124 13:17:45.475751 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:17:45 crc kubenswrapper[4756]: E1124 
13:17:45.476521 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.545136 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-99jjh"] Nov 24 13:17:56 crc kubenswrapper[4756]: E1124 13:17:56.546262 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51aa37f3-f6e5-4c74-8590-d603da72ae02" containerName="extract-utilities" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.546282 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="51aa37f3-f6e5-4c74-8590-d603da72ae02" containerName="extract-utilities" Nov 24 13:17:56 crc kubenswrapper[4756]: E1124 13:17:56.546308 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51aa37f3-f6e5-4c74-8590-d603da72ae02" containerName="registry-server" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.546316 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="51aa37f3-f6e5-4c74-8590-d603da72ae02" containerName="registry-server" Nov 24 13:17:56 crc kubenswrapper[4756]: E1124 13:17:56.546362 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51aa37f3-f6e5-4c74-8590-d603da72ae02" containerName="extract-content" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.546373 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="51aa37f3-f6e5-4c74-8590-d603da72ae02" containerName="extract-content" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.546624 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="51aa37f3-f6e5-4c74-8590-d603da72ae02" 
containerName="registry-server" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.549441 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.580344 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99jjh"] Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.719533 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-catalog-content\") pod \"redhat-operators-99jjh\" (UID: \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\") " pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.719625 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7djpd\" (UniqueName: \"kubernetes.io/projected/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-kube-api-access-7djpd\") pod \"redhat-operators-99jjh\" (UID: \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\") " pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.719672 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-utilities\") pod \"redhat-operators-99jjh\" (UID: \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\") " pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.821266 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-catalog-content\") pod \"redhat-operators-99jjh\" (UID: \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\") " 
pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.821431 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7djpd\" (UniqueName: \"kubernetes.io/projected/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-kube-api-access-7djpd\") pod \"redhat-operators-99jjh\" (UID: \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\") " pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.821500 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-utilities\") pod \"redhat-operators-99jjh\" (UID: \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\") " pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.821816 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-catalog-content\") pod \"redhat-operators-99jjh\" (UID: \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\") " pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.821914 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-utilities\") pod \"redhat-operators-99jjh\" (UID: \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\") " pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:17:56 crc kubenswrapper[4756]: I1124 13:17:56.841756 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7djpd\" (UniqueName: \"kubernetes.io/projected/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-kube-api-access-7djpd\") pod \"redhat-operators-99jjh\" (UID: \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\") " pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:17:56 
crc kubenswrapper[4756]: I1124 13:17:56.886273 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:17:57 crc kubenswrapper[4756]: I1124 13:17:57.362050 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99jjh"] Nov 24 13:17:57 crc kubenswrapper[4756]: I1124 13:17:57.614582 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99jjh" event={"ID":"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3","Type":"ContainerStarted","Data":"aef4a6827b4b8b4b1d56ff0a59144e20b5c63ee4a9df07a51c26b4aa0d7b632b"} Nov 24 13:17:57 crc kubenswrapper[4756]: I1124 13:17:57.614871 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99jjh" event={"ID":"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3","Type":"ContainerStarted","Data":"62f49ca08831deeee04bd72fbe95f55c55e330eb6a84bc6c3a63169f0cfe6cff"} Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.640080 4756 generic.go:334] "Generic (PLEG): container finished" podID="483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" containerID="aef4a6827b4b8b4b1d56ff0a59144e20b5c63ee4a9df07a51c26b4aa0d7b632b" exitCode=0 Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.640124 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99jjh" event={"ID":"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3","Type":"ContainerDied","Data":"aef4a6827b4b8b4b1d56ff0a59144e20b5c63ee4a9df07a51c26b4aa0d7b632b"} Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.733635 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gw84p"] Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.736788 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.748251 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gw84p"] Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.866789 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c1e799b-ad89-464f-8392-f3740818e984-utilities\") pod \"redhat-marketplace-gw84p\" (UID: \"7c1e799b-ad89-464f-8392-f3740818e984\") " pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.866852 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh7nw\" (UniqueName: \"kubernetes.io/projected/7c1e799b-ad89-464f-8392-f3740818e984-kube-api-access-bh7nw\") pod \"redhat-marketplace-gw84p\" (UID: \"7c1e799b-ad89-464f-8392-f3740818e984\") " pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.866920 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c1e799b-ad89-464f-8392-f3740818e984-catalog-content\") pod \"redhat-marketplace-gw84p\" (UID: \"7c1e799b-ad89-464f-8392-f3740818e984\") " pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.968316 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c1e799b-ad89-464f-8392-f3740818e984-catalog-content\") pod \"redhat-marketplace-gw84p\" (UID: \"7c1e799b-ad89-464f-8392-f3740818e984\") " pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.968459 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c1e799b-ad89-464f-8392-f3740818e984-utilities\") pod \"redhat-marketplace-gw84p\" (UID: \"7c1e799b-ad89-464f-8392-f3740818e984\") " pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.968501 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh7nw\" (UniqueName: \"kubernetes.io/projected/7c1e799b-ad89-464f-8392-f3740818e984-kube-api-access-bh7nw\") pod \"redhat-marketplace-gw84p\" (UID: \"7c1e799b-ad89-464f-8392-f3740818e984\") " pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.968949 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c1e799b-ad89-464f-8392-f3740818e984-catalog-content\") pod \"redhat-marketplace-gw84p\" (UID: \"7c1e799b-ad89-464f-8392-f3740818e984\") " pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.969070 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c1e799b-ad89-464f-8392-f3740818e984-utilities\") pod \"redhat-marketplace-gw84p\" (UID: \"7c1e799b-ad89-464f-8392-f3740818e984\") " pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:17:58 crc kubenswrapper[4756]: I1124 13:17:58.986080 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh7nw\" (UniqueName: \"kubernetes.io/projected/7c1e799b-ad89-464f-8392-f3740818e984-kube-api-access-bh7nw\") pod \"redhat-marketplace-gw84p\" (UID: \"7c1e799b-ad89-464f-8392-f3740818e984\") " pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:17:59 crc kubenswrapper[4756]: I1124 13:17:59.065697 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:17:59 crc kubenswrapper[4756]: I1124 13:17:59.478571 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:17:59 crc kubenswrapper[4756]: E1124 13:17:59.479369 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:17:59 crc kubenswrapper[4756]: I1124 13:17:59.519421 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gw84p"] Nov 24 13:17:59 crc kubenswrapper[4756]: W1124 13:17:59.520415 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c1e799b_ad89_464f_8392_f3740818e984.slice/crio-d4dde7da8461dd69c791d43a23af4595cbead935a9adda77bfa7e1115e85b0c8 WatchSource:0}: Error finding container d4dde7da8461dd69c791d43a23af4595cbead935a9adda77bfa7e1115e85b0c8: Status 404 returned error can't find the container with id d4dde7da8461dd69c791d43a23af4595cbead935a9adda77bfa7e1115e85b0c8 Nov 24 13:17:59 crc kubenswrapper[4756]: I1124 13:17:59.658497 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw84p" event={"ID":"7c1e799b-ad89-464f-8392-f3740818e984","Type":"ContainerStarted","Data":"d4dde7da8461dd69c791d43a23af4595cbead935a9adda77bfa7e1115e85b0c8"} Nov 24 13:18:00 crc kubenswrapper[4756]: I1124 13:18:00.669822 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99jjh" 
event={"ID":"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3","Type":"ContainerStarted","Data":"24a402e85e135eff398e3b2ce2a9101da4994eab77375effde7fd87c7df41c4f"} Nov 24 13:18:00 crc kubenswrapper[4756]: I1124 13:18:00.671725 4756 generic.go:334] "Generic (PLEG): container finished" podID="7c1e799b-ad89-464f-8392-f3740818e984" containerID="6b695b47c47cfdaa4aabd94f7a25c621aa0e7abb7ba276404acacb0092114501" exitCode=0 Nov 24 13:18:00 crc kubenswrapper[4756]: I1124 13:18:00.671784 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw84p" event={"ID":"7c1e799b-ad89-464f-8392-f3740818e984","Type":"ContainerDied","Data":"6b695b47c47cfdaa4aabd94f7a25c621aa0e7abb7ba276404acacb0092114501"} Nov 24 13:18:02 crc kubenswrapper[4756]: I1124 13:18:02.707736 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw84p" event={"ID":"7c1e799b-ad89-464f-8392-f3740818e984","Type":"ContainerStarted","Data":"0a930891eddac8f7422a5c1386b7dec673caa006bdaf0330e5a3e691ee285c2d"} Nov 24 13:18:09 crc kubenswrapper[4756]: I1124 13:18:09.784191 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw84p" event={"ID":"7c1e799b-ad89-464f-8392-f3740818e984","Type":"ContainerDied","Data":"0a930891eddac8f7422a5c1386b7dec673caa006bdaf0330e5a3e691ee285c2d"} Nov 24 13:18:09 crc kubenswrapper[4756]: I1124 13:18:09.784363 4756 generic.go:334] "Generic (PLEG): container finished" podID="7c1e799b-ad89-464f-8392-f3740818e984" containerID="0a930891eddac8f7422a5c1386b7dec673caa006bdaf0330e5a3e691ee285c2d" exitCode=0 Nov 24 13:18:10 crc kubenswrapper[4756]: I1124 13:18:10.476338 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:18:10 crc kubenswrapper[4756]: E1124 13:18:10.476970 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:18:10 crc kubenswrapper[4756]: I1124 13:18:10.795773 4756 generic.go:334] "Generic (PLEG): container finished" podID="483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" containerID="24a402e85e135eff398e3b2ce2a9101da4994eab77375effde7fd87c7df41c4f" exitCode=0 Nov 24 13:18:10 crc kubenswrapper[4756]: I1124 13:18:10.795864 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99jjh" event={"ID":"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3","Type":"ContainerDied","Data":"24a402e85e135eff398e3b2ce2a9101da4994eab77375effde7fd87c7df41c4f"} Nov 24 13:18:10 crc kubenswrapper[4756]: I1124 13:18:10.799255 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw84p" event={"ID":"7c1e799b-ad89-464f-8392-f3740818e984","Type":"ContainerStarted","Data":"dd9c1938694a6a67eb7d51e0511eb99dac6a58b448dbf9e0e6d57b15bbea8a5f"} Nov 24 13:18:10 crc kubenswrapper[4756]: I1124 13:18:10.838242 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gw84p" podStartSLOduration=3.201266367 podStartE2EDuration="12.83819603s" podCreationTimestamp="2025-11-24 13:17:58 +0000 UTC" firstStartedPulling="2025-11-24 13:18:00.673284821 +0000 UTC m=+3013.030798963" lastFinishedPulling="2025-11-24 13:18:10.310214494 +0000 UTC m=+3022.667728626" observedRunningTime="2025-11-24 13:18:10.834334235 +0000 UTC m=+3023.191848377" watchObservedRunningTime="2025-11-24 13:18:10.83819603 +0000 UTC m=+3023.195710202" Nov 24 13:18:12 crc kubenswrapper[4756]: I1124 13:18:12.819841 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-99jjh" event={"ID":"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3","Type":"ContainerStarted","Data":"61b969c8b023971f6860fc7aedf88f2ca2f59b82e9f6ae4575d5048ff146cba7"} Nov 24 13:18:12 crc kubenswrapper[4756]: I1124 13:18:12.859951 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-99jjh" podStartSLOduration=3.7147902 podStartE2EDuration="16.859927949s" podCreationTimestamp="2025-11-24 13:17:56 +0000 UTC" firstStartedPulling="2025-11-24 13:17:58.642128625 +0000 UTC m=+3010.999642777" lastFinishedPulling="2025-11-24 13:18:11.787266384 +0000 UTC m=+3024.144780526" observedRunningTime="2025-11-24 13:18:12.849030863 +0000 UTC m=+3025.206545025" watchObservedRunningTime="2025-11-24 13:18:12.859927949 +0000 UTC m=+3025.217442091" Nov 24 13:18:16 crc kubenswrapper[4756]: I1124 13:18:16.886471 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:18:16 crc kubenswrapper[4756]: I1124 13:18:16.887449 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:18:17 crc kubenswrapper[4756]: I1124 13:18:17.952357 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-99jjh" podUID="483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" containerName="registry-server" probeResult="failure" output=< Nov 24 13:18:17 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 13:18:17 crc kubenswrapper[4756]: > Nov 24 13:18:19 crc kubenswrapper[4756]: I1124 13:18:19.066781 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:18:19 crc kubenswrapper[4756]: I1124 13:18:19.066839 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gw84p" 
Nov 24 13:18:19 crc kubenswrapper[4756]: I1124 13:18:19.122903 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:18:19 crc kubenswrapper[4756]: I1124 13:18:19.949779 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:18:20 crc kubenswrapper[4756]: I1124 13:18:20.002386 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gw84p"] Nov 24 13:18:21 crc kubenswrapper[4756]: I1124 13:18:21.911789 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gw84p" podUID="7c1e799b-ad89-464f-8392-f3740818e984" containerName="registry-server" containerID="cri-o://dd9c1938694a6a67eb7d51e0511eb99dac6a58b448dbf9e0e6d57b15bbea8a5f" gracePeriod=2 Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.452119 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.651469 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh7nw\" (UniqueName: \"kubernetes.io/projected/7c1e799b-ad89-464f-8392-f3740818e984-kube-api-access-bh7nw\") pod \"7c1e799b-ad89-464f-8392-f3740818e984\" (UID: \"7c1e799b-ad89-464f-8392-f3740818e984\") " Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.651616 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c1e799b-ad89-464f-8392-f3740818e984-utilities\") pod \"7c1e799b-ad89-464f-8392-f3740818e984\" (UID: \"7c1e799b-ad89-464f-8392-f3740818e984\") " Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.651767 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c1e799b-ad89-464f-8392-f3740818e984-catalog-content\") pod \"7c1e799b-ad89-464f-8392-f3740818e984\" (UID: \"7c1e799b-ad89-464f-8392-f3740818e984\") " Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.652869 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c1e799b-ad89-464f-8392-f3740818e984-utilities" (OuterVolumeSpecName: "utilities") pod "7c1e799b-ad89-464f-8392-f3740818e984" (UID: "7c1e799b-ad89-464f-8392-f3740818e984"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.653482 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c1e799b-ad89-464f-8392-f3740818e984-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.659728 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c1e799b-ad89-464f-8392-f3740818e984-kube-api-access-bh7nw" (OuterVolumeSpecName: "kube-api-access-bh7nw") pod "7c1e799b-ad89-464f-8392-f3740818e984" (UID: "7c1e799b-ad89-464f-8392-f3740818e984"). InnerVolumeSpecName "kube-api-access-bh7nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.670968 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c1e799b-ad89-464f-8392-f3740818e984-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c1e799b-ad89-464f-8392-f3740818e984" (UID: "7c1e799b-ad89-464f-8392-f3740818e984"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.755953 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh7nw\" (UniqueName: \"kubernetes.io/projected/7c1e799b-ad89-464f-8392-f3740818e984-kube-api-access-bh7nw\") on node \"crc\" DevicePath \"\"" Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.756253 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c1e799b-ad89-464f-8392-f3740818e984-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.934128 4756 generic.go:334] "Generic (PLEG): container finished" podID="7c1e799b-ad89-464f-8392-f3740818e984" containerID="dd9c1938694a6a67eb7d51e0511eb99dac6a58b448dbf9e0e6d57b15bbea8a5f" exitCode=0 Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.934238 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw84p" event={"ID":"7c1e799b-ad89-464f-8392-f3740818e984","Type":"ContainerDied","Data":"dd9c1938694a6a67eb7d51e0511eb99dac6a58b448dbf9e0e6d57b15bbea8a5f"} Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.934306 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gw84p" Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.934363 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gw84p" event={"ID":"7c1e799b-ad89-464f-8392-f3740818e984","Type":"ContainerDied","Data":"d4dde7da8461dd69c791d43a23af4595cbead935a9adda77bfa7e1115e85b0c8"} Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.934408 4756 scope.go:117] "RemoveContainer" containerID="dd9c1938694a6a67eb7d51e0511eb99dac6a58b448dbf9e0e6d57b15bbea8a5f" Nov 24 13:18:22 crc kubenswrapper[4756]: I1124 13:18:22.985367 4756 scope.go:117] "RemoveContainer" containerID="0a930891eddac8f7422a5c1386b7dec673caa006bdaf0330e5a3e691ee285c2d" Nov 24 13:18:23 crc kubenswrapper[4756]: I1124 13:18:23.017713 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gw84p"] Nov 24 13:18:23 crc kubenswrapper[4756]: I1124 13:18:23.029533 4756 scope.go:117] "RemoveContainer" containerID="6b695b47c47cfdaa4aabd94f7a25c621aa0e7abb7ba276404acacb0092114501" Nov 24 13:18:23 crc kubenswrapper[4756]: I1124 13:18:23.030591 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gw84p"] Nov 24 13:18:23 crc kubenswrapper[4756]: I1124 13:18:23.075616 4756 scope.go:117] "RemoveContainer" containerID="dd9c1938694a6a67eb7d51e0511eb99dac6a58b448dbf9e0e6d57b15bbea8a5f" Nov 24 13:18:23 crc kubenswrapper[4756]: E1124 13:18:23.076054 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd9c1938694a6a67eb7d51e0511eb99dac6a58b448dbf9e0e6d57b15bbea8a5f\": container with ID starting with dd9c1938694a6a67eb7d51e0511eb99dac6a58b448dbf9e0e6d57b15bbea8a5f not found: ID does not exist" containerID="dd9c1938694a6a67eb7d51e0511eb99dac6a58b448dbf9e0e6d57b15bbea8a5f" Nov 24 13:18:23 crc kubenswrapper[4756]: I1124 13:18:23.076093 4756 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd9c1938694a6a67eb7d51e0511eb99dac6a58b448dbf9e0e6d57b15bbea8a5f"} err="failed to get container status \"dd9c1938694a6a67eb7d51e0511eb99dac6a58b448dbf9e0e6d57b15bbea8a5f\": rpc error: code = NotFound desc = could not find container \"dd9c1938694a6a67eb7d51e0511eb99dac6a58b448dbf9e0e6d57b15bbea8a5f\": container with ID starting with dd9c1938694a6a67eb7d51e0511eb99dac6a58b448dbf9e0e6d57b15bbea8a5f not found: ID does not exist" Nov 24 13:18:23 crc kubenswrapper[4756]: I1124 13:18:23.076121 4756 scope.go:117] "RemoveContainer" containerID="0a930891eddac8f7422a5c1386b7dec673caa006bdaf0330e5a3e691ee285c2d" Nov 24 13:18:23 crc kubenswrapper[4756]: E1124 13:18:23.076556 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a930891eddac8f7422a5c1386b7dec673caa006bdaf0330e5a3e691ee285c2d\": container with ID starting with 0a930891eddac8f7422a5c1386b7dec673caa006bdaf0330e5a3e691ee285c2d not found: ID does not exist" containerID="0a930891eddac8f7422a5c1386b7dec673caa006bdaf0330e5a3e691ee285c2d" Nov 24 13:18:23 crc kubenswrapper[4756]: I1124 13:18:23.076608 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a930891eddac8f7422a5c1386b7dec673caa006bdaf0330e5a3e691ee285c2d"} err="failed to get container status \"0a930891eddac8f7422a5c1386b7dec673caa006bdaf0330e5a3e691ee285c2d\": rpc error: code = NotFound desc = could not find container \"0a930891eddac8f7422a5c1386b7dec673caa006bdaf0330e5a3e691ee285c2d\": container with ID starting with 0a930891eddac8f7422a5c1386b7dec673caa006bdaf0330e5a3e691ee285c2d not found: ID does not exist" Nov 24 13:18:23 crc kubenswrapper[4756]: I1124 13:18:23.076649 4756 scope.go:117] "RemoveContainer" containerID="6b695b47c47cfdaa4aabd94f7a25c621aa0e7abb7ba276404acacb0092114501" Nov 24 13:18:23 crc kubenswrapper[4756]: E1124 
13:18:23.076953 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b695b47c47cfdaa4aabd94f7a25c621aa0e7abb7ba276404acacb0092114501\": container with ID starting with 6b695b47c47cfdaa4aabd94f7a25c621aa0e7abb7ba276404acacb0092114501 not found: ID does not exist" containerID="6b695b47c47cfdaa4aabd94f7a25c621aa0e7abb7ba276404acacb0092114501" Nov 24 13:18:23 crc kubenswrapper[4756]: I1124 13:18:23.076987 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b695b47c47cfdaa4aabd94f7a25c621aa0e7abb7ba276404acacb0092114501"} err="failed to get container status \"6b695b47c47cfdaa4aabd94f7a25c621aa0e7abb7ba276404acacb0092114501\": rpc error: code = NotFound desc = could not find container \"6b695b47c47cfdaa4aabd94f7a25c621aa0e7abb7ba276404acacb0092114501\": container with ID starting with 6b695b47c47cfdaa4aabd94f7a25c621aa0e7abb7ba276404acacb0092114501 not found: ID does not exist" Nov 24 13:18:23 crc kubenswrapper[4756]: I1124 13:18:23.477527 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:18:23 crc kubenswrapper[4756]: E1124 13:18:23.477943 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:18:24 crc kubenswrapper[4756]: I1124 13:18:24.490426 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c1e799b-ad89-464f-8392-f3740818e984" path="/var/lib/kubelet/pods/7c1e799b-ad89-464f-8392-f3740818e984/volumes" Nov 24 13:18:26 crc kubenswrapper[4756]: I1124 13:18:26.986584 
4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:18:27 crc kubenswrapper[4756]: I1124 13:18:27.070321 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:18:27 crc kubenswrapper[4756]: I1124 13:18:27.738971 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-99jjh"] Nov 24 13:18:29 crc kubenswrapper[4756]: I1124 13:18:29.001623 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-99jjh" podUID="483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" containerName="registry-server" containerID="cri-o://61b969c8b023971f6860fc7aedf88f2ca2f59b82e9f6ae4575d5048ff146cba7" gracePeriod=2 Nov 24 13:18:29 crc kubenswrapper[4756]: I1124 13:18:29.557228 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:18:29 crc kubenswrapper[4756]: I1124 13:18:29.697014 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7djpd\" (UniqueName: \"kubernetes.io/projected/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-kube-api-access-7djpd\") pod \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\" (UID: \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\") " Nov 24 13:18:29 crc kubenswrapper[4756]: I1124 13:18:29.697159 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-catalog-content\") pod \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\" (UID: \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\") " Nov 24 13:18:29 crc kubenswrapper[4756]: I1124 13:18:29.702817 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-utilities\") pod \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\" (UID: \"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3\") " Nov 24 13:18:29 crc kubenswrapper[4756]: I1124 13:18:29.703547 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-utilities" (OuterVolumeSpecName: "utilities") pod "483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" (UID: "483b53a6-f7f6-40cb-afa0-da8ffb7d19d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:18:29 crc kubenswrapper[4756]: I1124 13:18:29.703860 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:18:29 crc kubenswrapper[4756]: I1124 13:18:29.705460 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-kube-api-access-7djpd" (OuterVolumeSpecName: "kube-api-access-7djpd") pod "483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" (UID: "483b53a6-f7f6-40cb-afa0-da8ffb7d19d3"). InnerVolumeSpecName "kube-api-access-7djpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:18:29 crc kubenswrapper[4756]: I1124 13:18:29.806125 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7djpd\" (UniqueName: \"kubernetes.io/projected/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-kube-api-access-7djpd\") on node \"crc\" DevicePath \"\"" Nov 24 13:18:29 crc kubenswrapper[4756]: I1124 13:18:29.810686 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" (UID: "483b53a6-f7f6-40cb-afa0-da8ffb7d19d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:18:29 crc kubenswrapper[4756]: I1124 13:18:29.908658 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.013070 4756 generic.go:334] "Generic (PLEG): container finished" podID="483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" containerID="61b969c8b023971f6860fc7aedf88f2ca2f59b82e9f6ae4575d5048ff146cba7" exitCode=0 Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.013121 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99jjh" event={"ID":"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3","Type":"ContainerDied","Data":"61b969c8b023971f6860fc7aedf88f2ca2f59b82e9f6ae4575d5048ff146cba7"} Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.013186 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-99jjh" Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.013234 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99jjh" event={"ID":"483b53a6-f7f6-40cb-afa0-da8ffb7d19d3","Type":"ContainerDied","Data":"62f49ca08831deeee04bd72fbe95f55c55e330eb6a84bc6c3a63169f0cfe6cff"} Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.013273 4756 scope.go:117] "RemoveContainer" containerID="61b969c8b023971f6860fc7aedf88f2ca2f59b82e9f6ae4575d5048ff146cba7" Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.043692 4756 scope.go:117] "RemoveContainer" containerID="24a402e85e135eff398e3b2ce2a9101da4994eab77375effde7fd87c7df41c4f" Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.051489 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-99jjh"] Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.059879 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-99jjh"] Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.075641 4756 scope.go:117] "RemoveContainer" containerID="aef4a6827b4b8b4b1d56ff0a59144e20b5c63ee4a9df07a51c26b4aa0d7b632b" Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.108761 4756 scope.go:117] "RemoveContainer" containerID="61b969c8b023971f6860fc7aedf88f2ca2f59b82e9f6ae4575d5048ff146cba7" Nov 24 13:18:30 crc kubenswrapper[4756]: E1124 13:18:30.109159 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61b969c8b023971f6860fc7aedf88f2ca2f59b82e9f6ae4575d5048ff146cba7\": container with ID starting with 61b969c8b023971f6860fc7aedf88f2ca2f59b82e9f6ae4575d5048ff146cba7 not found: ID does not exist" containerID="61b969c8b023971f6860fc7aedf88f2ca2f59b82e9f6ae4575d5048ff146cba7" Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.109222 4756 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61b969c8b023971f6860fc7aedf88f2ca2f59b82e9f6ae4575d5048ff146cba7"} err="failed to get container status \"61b969c8b023971f6860fc7aedf88f2ca2f59b82e9f6ae4575d5048ff146cba7\": rpc error: code = NotFound desc = could not find container \"61b969c8b023971f6860fc7aedf88f2ca2f59b82e9f6ae4575d5048ff146cba7\": container with ID starting with 61b969c8b023971f6860fc7aedf88f2ca2f59b82e9f6ae4575d5048ff146cba7 not found: ID does not exist" Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.109253 4756 scope.go:117] "RemoveContainer" containerID="24a402e85e135eff398e3b2ce2a9101da4994eab77375effde7fd87c7df41c4f" Nov 24 13:18:30 crc kubenswrapper[4756]: E1124 13:18:30.109592 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24a402e85e135eff398e3b2ce2a9101da4994eab77375effde7fd87c7df41c4f\": container with ID starting with 24a402e85e135eff398e3b2ce2a9101da4994eab77375effde7fd87c7df41c4f not found: ID does not exist" containerID="24a402e85e135eff398e3b2ce2a9101da4994eab77375effde7fd87c7df41c4f" Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.109622 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24a402e85e135eff398e3b2ce2a9101da4994eab77375effde7fd87c7df41c4f"} err="failed to get container status \"24a402e85e135eff398e3b2ce2a9101da4994eab77375effde7fd87c7df41c4f\": rpc error: code = NotFound desc = could not find container \"24a402e85e135eff398e3b2ce2a9101da4994eab77375effde7fd87c7df41c4f\": container with ID starting with 24a402e85e135eff398e3b2ce2a9101da4994eab77375effde7fd87c7df41c4f not found: ID does not exist" Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.109669 4756 scope.go:117] "RemoveContainer" containerID="aef4a6827b4b8b4b1d56ff0a59144e20b5c63ee4a9df07a51c26b4aa0d7b632b" Nov 24 13:18:30 crc kubenswrapper[4756]: E1124 
13:18:30.110025 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aef4a6827b4b8b4b1d56ff0a59144e20b5c63ee4a9df07a51c26b4aa0d7b632b\": container with ID starting with aef4a6827b4b8b4b1d56ff0a59144e20b5c63ee4a9df07a51c26b4aa0d7b632b not found: ID does not exist" containerID="aef4a6827b4b8b4b1d56ff0a59144e20b5c63ee4a9df07a51c26b4aa0d7b632b" Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.110069 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aef4a6827b4b8b4b1d56ff0a59144e20b5c63ee4a9df07a51c26b4aa0d7b632b"} err="failed to get container status \"aef4a6827b4b8b4b1d56ff0a59144e20b5c63ee4a9df07a51c26b4aa0d7b632b\": rpc error: code = NotFound desc = could not find container \"aef4a6827b4b8b4b1d56ff0a59144e20b5c63ee4a9df07a51c26b4aa0d7b632b\": container with ID starting with aef4a6827b4b8b4b1d56ff0a59144e20b5c63ee4a9df07a51c26b4aa0d7b632b not found: ID does not exist" Nov 24 13:18:30 crc kubenswrapper[4756]: I1124 13:18:30.488465 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" path="/var/lib/kubelet/pods/483b53a6-f7f6-40cb-afa0-da8ffb7d19d3/volumes" Nov 24 13:18:37 crc kubenswrapper[4756]: I1124 13:18:37.476143 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:18:37 crc kubenswrapper[4756]: E1124 13:18:37.476824 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:18:51 crc kubenswrapper[4756]: I1124 13:18:51.475143 
4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:18:51 crc kubenswrapper[4756]: E1124 13:18:51.476069 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:19:05 crc kubenswrapper[4756]: I1124 13:19:05.476609 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:19:05 crc kubenswrapper[4756]: E1124 13:19:05.477415 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:19:17 crc kubenswrapper[4756]: I1124 13:19:17.475550 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:19:17 crc kubenswrapper[4756]: E1124 13:19:17.476354 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:19:20 crc kubenswrapper[4756]: I1124 
13:19:20.289339 4756 scope.go:117] "RemoveContainer" containerID="417c90913734ad1f4189736f6a854027922aa54fd57be9bb51fcf6ee0dca38ba" Nov 24 13:19:20 crc kubenswrapper[4756]: I1124 13:19:20.324612 4756 scope.go:117] "RemoveContainer" containerID="4a224661cc0c67d72af23104daefce078bdb225bf14e0d902784a79b3c8b4098" Nov 24 13:19:20 crc kubenswrapper[4756]: I1124 13:19:20.365049 4756 scope.go:117] "RemoveContainer" containerID="b3e00ea494f42e930dad1d552a24aec4f1668d612a6ba94b1e9a4af53efffab4" Nov 24 13:19:32 crc kubenswrapper[4756]: I1124 13:19:32.476376 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:19:32 crc kubenswrapper[4756]: E1124 13:19:32.479948 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:19:46 crc kubenswrapper[4756]: I1124 13:19:46.476021 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:19:46 crc kubenswrapper[4756]: E1124 13:19:46.476677 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:20:01 crc kubenswrapper[4756]: I1124 13:20:01.476907 4756 scope.go:117] "RemoveContainer" 
containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:20:01 crc kubenswrapper[4756]: E1124 13:20:01.478038 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:20:16 crc kubenswrapper[4756]: I1124 13:20:16.476252 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:20:16 crc kubenswrapper[4756]: E1124 13:20:16.477606 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:20:27 crc kubenswrapper[4756]: I1124 13:20:27.476682 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:20:27 crc kubenswrapper[4756]: E1124 13:20:27.477624 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:20:39 crc kubenswrapper[4756]: I1124 13:20:39.475385 4756 scope.go:117] 
"RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:20:39 crc kubenswrapper[4756]: E1124 13:20:39.476116 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:20:54 crc kubenswrapper[4756]: I1124 13:20:54.481538 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:20:54 crc kubenswrapper[4756]: E1124 13:20:54.488437 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:21:06 crc kubenswrapper[4756]: I1124 13:21:06.475889 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:21:06 crc kubenswrapper[4756]: E1124 13:21:06.477015 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:21:18 crc kubenswrapper[4756]: I1124 13:21:18.492055 
4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:21:18 crc kubenswrapper[4756]: E1124 13:21:18.493371 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:21:29 crc kubenswrapper[4756]: I1124 13:21:29.477255 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:21:29 crc kubenswrapper[4756]: E1124 13:21:29.478675 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:21:44 crc kubenswrapper[4756]: I1124 13:21:44.476684 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:21:44 crc kubenswrapper[4756]: E1124 13:21:44.478463 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:21:55 crc kubenswrapper[4756]: I1124 
13:21:55.476017 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:21:55 crc kubenswrapper[4756]: E1124 13:21:55.476777 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:22:08 crc kubenswrapper[4756]: I1124 13:22:08.491736 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:22:08 crc kubenswrapper[4756]: I1124 13:22:08.891132 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"1525cecc0222ec0a10eaa070816189df590572cf560144a3645affe2d27a9285"} Nov 24 13:24:33 crc kubenswrapper[4756]: I1124 13:24:33.479196 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:24:33 crc kubenswrapper[4756]: I1124 13:24:33.479729 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:25:03 crc kubenswrapper[4756]: I1124 13:25:03.478865 4756 patch_prober.go:28] interesting 
pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:25:03 crc kubenswrapper[4756]: I1124 13:25:03.479598 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:25:33 crc kubenswrapper[4756]: I1124 13:25:33.479422 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:25:33 crc kubenswrapper[4756]: I1124 13:25:33.480375 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:25:33 crc kubenswrapper[4756]: I1124 13:25:33.480448 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 13:25:33 crc kubenswrapper[4756]: I1124 13:25:33.481002 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1525cecc0222ec0a10eaa070816189df590572cf560144a3645affe2d27a9285"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Nov 24 13:25:33 crc kubenswrapper[4756]: I1124 13:25:33.481083 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://1525cecc0222ec0a10eaa070816189df590572cf560144a3645affe2d27a9285" gracePeriod=600 Nov 24 13:25:34 crc kubenswrapper[4756]: I1124 13:25:34.146039 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="1525cecc0222ec0a10eaa070816189df590572cf560144a3645affe2d27a9285" exitCode=0 Nov 24 13:25:34 crc kubenswrapper[4756]: I1124 13:25:34.146191 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"1525cecc0222ec0a10eaa070816189df590572cf560144a3645affe2d27a9285"} Nov 24 13:25:34 crc kubenswrapper[4756]: I1124 13:25:34.146844 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d"} Nov 24 13:25:34 crc kubenswrapper[4756]: I1124 13:25:34.146901 4756 scope.go:117] "RemoveContainer" containerID="6b9d2bdde9e6abe33ee718e3e0709b32c9a4e75ac09eedb30ea62a1d2a25c622" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.253128 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gz9xp"] Nov 24 13:26:21 crc kubenswrapper[4756]: E1124 13:26:21.254593 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c1e799b-ad89-464f-8392-f3740818e984" containerName="extract-content" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.254628 4756 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="7c1e799b-ad89-464f-8392-f3740818e984" containerName="extract-content" Nov 24 13:26:21 crc kubenswrapper[4756]: E1124 13:26:21.254644 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c1e799b-ad89-464f-8392-f3740818e984" containerName="extract-utilities" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.254650 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c1e799b-ad89-464f-8392-f3740818e984" containerName="extract-utilities" Nov 24 13:26:21 crc kubenswrapper[4756]: E1124 13:26:21.254678 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" containerName="extract-utilities" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.254706 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" containerName="extract-utilities" Nov 24 13:26:21 crc kubenswrapper[4756]: E1124 13:26:21.254725 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c1e799b-ad89-464f-8392-f3740818e984" containerName="registry-server" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.254731 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c1e799b-ad89-464f-8392-f3740818e984" containerName="registry-server" Nov 24 13:26:21 crc kubenswrapper[4756]: E1124 13:26:21.254745 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" containerName="registry-server" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.254750 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" containerName="registry-server" Nov 24 13:26:21 crc kubenswrapper[4756]: E1124 13:26:21.254758 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" containerName="extract-content" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.254764 4756 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" containerName="extract-content" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.254938 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c1e799b-ad89-464f-8392-f3740818e984" containerName="registry-server" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.254952 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="483b53a6-f7f6-40cb-afa0-da8ffb7d19d3" containerName="registry-server" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.257691 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.264836 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gz9xp"] Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.321338 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1785ac95-7008-49da-8c42-97e84c1f605e-catalog-content\") pod \"community-operators-gz9xp\" (UID: \"1785ac95-7008-49da-8c42-97e84c1f605e\") " pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.321553 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96g75\" (UniqueName: \"kubernetes.io/projected/1785ac95-7008-49da-8c42-97e84c1f605e-kube-api-access-96g75\") pod \"community-operators-gz9xp\" (UID: \"1785ac95-7008-49da-8c42-97e84c1f605e\") " pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.321690 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1785ac95-7008-49da-8c42-97e84c1f605e-utilities\") pod 
\"community-operators-gz9xp\" (UID: \"1785ac95-7008-49da-8c42-97e84c1f605e\") " pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.423408 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1785ac95-7008-49da-8c42-97e84c1f605e-catalog-content\") pod \"community-operators-gz9xp\" (UID: \"1785ac95-7008-49da-8c42-97e84c1f605e\") " pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.423515 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96g75\" (UniqueName: \"kubernetes.io/projected/1785ac95-7008-49da-8c42-97e84c1f605e-kube-api-access-96g75\") pod \"community-operators-gz9xp\" (UID: \"1785ac95-7008-49da-8c42-97e84c1f605e\") " pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.423554 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1785ac95-7008-49da-8c42-97e84c1f605e-utilities\") pod \"community-operators-gz9xp\" (UID: \"1785ac95-7008-49da-8c42-97e84c1f605e\") " pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.424116 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1785ac95-7008-49da-8c42-97e84c1f605e-utilities\") pod \"community-operators-gz9xp\" (UID: \"1785ac95-7008-49da-8c42-97e84c1f605e\") " pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.424375 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1785ac95-7008-49da-8c42-97e84c1f605e-catalog-content\") pod \"community-operators-gz9xp\" (UID: 
\"1785ac95-7008-49da-8c42-97e84c1f605e\") " pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.446840 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96g75\" (UniqueName: \"kubernetes.io/projected/1785ac95-7008-49da-8c42-97e84c1f605e-kube-api-access-96g75\") pod \"community-operators-gz9xp\" (UID: \"1785ac95-7008-49da-8c42-97e84c1f605e\") " pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:21 crc kubenswrapper[4756]: I1124 13:26:21.579800 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:22 crc kubenswrapper[4756]: I1124 13:26:22.156031 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gz9xp"] Nov 24 13:26:22 crc kubenswrapper[4756]: I1124 13:26:22.642382 4756 generic.go:334] "Generic (PLEG): container finished" podID="1785ac95-7008-49da-8c42-97e84c1f605e" containerID="4b86321cc2034b173083dd8f4d79ea493773df39f7bcf843a389670790e12f16" exitCode=0 Nov 24 13:26:22 crc kubenswrapper[4756]: I1124 13:26:22.642447 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gz9xp" event={"ID":"1785ac95-7008-49da-8c42-97e84c1f605e","Type":"ContainerDied","Data":"4b86321cc2034b173083dd8f4d79ea493773df39f7bcf843a389670790e12f16"} Nov 24 13:26:22 crc kubenswrapper[4756]: I1124 13:26:22.643651 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gz9xp" event={"ID":"1785ac95-7008-49da-8c42-97e84c1f605e","Type":"ContainerStarted","Data":"ff016219e902a3ddd668ce6709fae1b20f3d41340264155a09708f28a80542ff"} Nov 24 13:26:22 crc kubenswrapper[4756]: I1124 13:26:22.644648 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 13:26:23 crc kubenswrapper[4756]: I1124 13:26:23.655054 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gz9xp" event={"ID":"1785ac95-7008-49da-8c42-97e84c1f605e","Type":"ContainerStarted","Data":"acebe8213e1ada57c36a20d9f84ea548a56a10fe3a61590bbde7498553933590"} Nov 24 13:26:24 crc kubenswrapper[4756]: I1124 13:26:24.665680 4756 generic.go:334] "Generic (PLEG): container finished" podID="1785ac95-7008-49da-8c42-97e84c1f605e" containerID="acebe8213e1ada57c36a20d9f84ea548a56a10fe3a61590bbde7498553933590" exitCode=0 Nov 24 13:26:24 crc kubenswrapper[4756]: I1124 13:26:24.665769 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gz9xp" event={"ID":"1785ac95-7008-49da-8c42-97e84c1f605e","Type":"ContainerDied","Data":"acebe8213e1ada57c36a20d9f84ea548a56a10fe3a61590bbde7498553933590"} Nov 24 13:26:25 crc kubenswrapper[4756]: I1124 13:26:25.679129 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gz9xp" event={"ID":"1785ac95-7008-49da-8c42-97e84c1f605e","Type":"ContainerStarted","Data":"06dbe86317bf3099b9e4fe6aa49ea81465f523ad5b8f0c159af0e93de5c62a5c"} Nov 24 13:26:25 crc kubenswrapper[4756]: I1124 13:26:25.709054 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gz9xp" podStartSLOduration=2.275534885 podStartE2EDuration="4.709027388s" podCreationTimestamp="2025-11-24 13:26:21 +0000 UTC" firstStartedPulling="2025-11-24 13:26:22.64438123 +0000 UTC m=+3515.001895382" lastFinishedPulling="2025-11-24 13:26:25.077873743 +0000 UTC m=+3517.435387885" observedRunningTime="2025-11-24 13:26:25.698572332 +0000 UTC m=+3518.056086474" watchObservedRunningTime="2025-11-24 13:26:25.709027388 +0000 UTC m=+3518.066541540" Nov 24 13:26:31 crc kubenswrapper[4756]: I1124 13:26:31.580512 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:31 crc 
kubenswrapper[4756]: I1124 13:26:31.582550 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:31 crc kubenswrapper[4756]: I1124 13:26:31.656238 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:31 crc kubenswrapper[4756]: I1124 13:26:31.792884 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:31 crc kubenswrapper[4756]: I1124 13:26:31.898256 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gz9xp"] Nov 24 13:26:33 crc kubenswrapper[4756]: I1124 13:26:33.758721 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gz9xp" podUID="1785ac95-7008-49da-8c42-97e84c1f605e" containerName="registry-server" containerID="cri-o://06dbe86317bf3099b9e4fe6aa49ea81465f523ad5b8f0c159af0e93de5c62a5c" gracePeriod=2 Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.258047 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.409291 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96g75\" (UniqueName: \"kubernetes.io/projected/1785ac95-7008-49da-8c42-97e84c1f605e-kube-api-access-96g75\") pod \"1785ac95-7008-49da-8c42-97e84c1f605e\" (UID: \"1785ac95-7008-49da-8c42-97e84c1f605e\") " Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.409386 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1785ac95-7008-49da-8c42-97e84c1f605e-utilities\") pod \"1785ac95-7008-49da-8c42-97e84c1f605e\" (UID: \"1785ac95-7008-49da-8c42-97e84c1f605e\") " Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.409617 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1785ac95-7008-49da-8c42-97e84c1f605e-catalog-content\") pod \"1785ac95-7008-49da-8c42-97e84c1f605e\" (UID: \"1785ac95-7008-49da-8c42-97e84c1f605e\") " Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.410276 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1785ac95-7008-49da-8c42-97e84c1f605e-utilities" (OuterVolumeSpecName: "utilities") pod "1785ac95-7008-49da-8c42-97e84c1f605e" (UID: "1785ac95-7008-49da-8c42-97e84c1f605e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.415621 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1785ac95-7008-49da-8c42-97e84c1f605e-kube-api-access-96g75" (OuterVolumeSpecName: "kube-api-access-96g75") pod "1785ac95-7008-49da-8c42-97e84c1f605e" (UID: "1785ac95-7008-49da-8c42-97e84c1f605e"). InnerVolumeSpecName "kube-api-access-96g75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.465620 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1785ac95-7008-49da-8c42-97e84c1f605e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1785ac95-7008-49da-8c42-97e84c1f605e" (UID: "1785ac95-7008-49da-8c42-97e84c1f605e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.512522 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96g75\" (UniqueName: \"kubernetes.io/projected/1785ac95-7008-49da-8c42-97e84c1f605e-kube-api-access-96g75\") on node \"crc\" DevicePath \"\"" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.512563 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1785ac95-7008-49da-8c42-97e84c1f605e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.512579 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1785ac95-7008-49da-8c42-97e84c1f605e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.769382 4756 generic.go:334] "Generic (PLEG): container finished" podID="1785ac95-7008-49da-8c42-97e84c1f605e" containerID="06dbe86317bf3099b9e4fe6aa49ea81465f523ad5b8f0c159af0e93de5c62a5c" exitCode=0 Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.769435 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gz9xp" event={"ID":"1785ac95-7008-49da-8c42-97e84c1f605e","Type":"ContainerDied","Data":"06dbe86317bf3099b9e4fe6aa49ea81465f523ad5b8f0c159af0e93de5c62a5c"} Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.769480 4756 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-gz9xp" event={"ID":"1785ac95-7008-49da-8c42-97e84c1f605e","Type":"ContainerDied","Data":"ff016219e902a3ddd668ce6709fae1b20f3d41340264155a09708f28a80542ff"} Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.769495 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gz9xp" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.769507 4756 scope.go:117] "RemoveContainer" containerID="06dbe86317bf3099b9e4fe6aa49ea81465f523ad5b8f0c159af0e93de5c62a5c" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.803118 4756 scope.go:117] "RemoveContainer" containerID="acebe8213e1ada57c36a20d9f84ea548a56a10fe3a61590bbde7498553933590" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.803389 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gz9xp"] Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.813722 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gz9xp"] Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.831327 4756 scope.go:117] "RemoveContainer" containerID="4b86321cc2034b173083dd8f4d79ea493773df39f7bcf843a389670790e12f16" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.903328 4756 scope.go:117] "RemoveContainer" containerID="06dbe86317bf3099b9e4fe6aa49ea81465f523ad5b8f0c159af0e93de5c62a5c" Nov 24 13:26:34 crc kubenswrapper[4756]: E1124 13:26:34.903820 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06dbe86317bf3099b9e4fe6aa49ea81465f523ad5b8f0c159af0e93de5c62a5c\": container with ID starting with 06dbe86317bf3099b9e4fe6aa49ea81465f523ad5b8f0c159af0e93de5c62a5c not found: ID does not exist" containerID="06dbe86317bf3099b9e4fe6aa49ea81465f523ad5b8f0c159af0e93de5c62a5c" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 
13:26:34.903865 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06dbe86317bf3099b9e4fe6aa49ea81465f523ad5b8f0c159af0e93de5c62a5c"} err="failed to get container status \"06dbe86317bf3099b9e4fe6aa49ea81465f523ad5b8f0c159af0e93de5c62a5c\": rpc error: code = NotFound desc = could not find container \"06dbe86317bf3099b9e4fe6aa49ea81465f523ad5b8f0c159af0e93de5c62a5c\": container with ID starting with 06dbe86317bf3099b9e4fe6aa49ea81465f523ad5b8f0c159af0e93de5c62a5c not found: ID does not exist" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.903895 4756 scope.go:117] "RemoveContainer" containerID="acebe8213e1ada57c36a20d9f84ea548a56a10fe3a61590bbde7498553933590" Nov 24 13:26:34 crc kubenswrapper[4756]: E1124 13:26:34.904235 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acebe8213e1ada57c36a20d9f84ea548a56a10fe3a61590bbde7498553933590\": container with ID starting with acebe8213e1ada57c36a20d9f84ea548a56a10fe3a61590bbde7498553933590 not found: ID does not exist" containerID="acebe8213e1ada57c36a20d9f84ea548a56a10fe3a61590bbde7498553933590" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.904263 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acebe8213e1ada57c36a20d9f84ea548a56a10fe3a61590bbde7498553933590"} err="failed to get container status \"acebe8213e1ada57c36a20d9f84ea548a56a10fe3a61590bbde7498553933590\": rpc error: code = NotFound desc = could not find container \"acebe8213e1ada57c36a20d9f84ea548a56a10fe3a61590bbde7498553933590\": container with ID starting with acebe8213e1ada57c36a20d9f84ea548a56a10fe3a61590bbde7498553933590 not found: ID does not exist" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.904281 4756 scope.go:117] "RemoveContainer" containerID="4b86321cc2034b173083dd8f4d79ea493773df39f7bcf843a389670790e12f16" Nov 24 13:26:34 crc 
kubenswrapper[4756]: E1124 13:26:34.904565 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b86321cc2034b173083dd8f4d79ea493773df39f7bcf843a389670790e12f16\": container with ID starting with 4b86321cc2034b173083dd8f4d79ea493773df39f7bcf843a389670790e12f16 not found: ID does not exist" containerID="4b86321cc2034b173083dd8f4d79ea493773df39f7bcf843a389670790e12f16" Nov 24 13:26:34 crc kubenswrapper[4756]: I1124 13:26:34.904599 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b86321cc2034b173083dd8f4d79ea493773df39f7bcf843a389670790e12f16"} err="failed to get container status \"4b86321cc2034b173083dd8f4d79ea493773df39f7bcf843a389670790e12f16\": rpc error: code = NotFound desc = could not find container \"4b86321cc2034b173083dd8f4d79ea493773df39f7bcf843a389670790e12f16\": container with ID starting with 4b86321cc2034b173083dd8f4d79ea493773df39f7bcf843a389670790e12f16 not found: ID does not exist" Nov 24 13:26:36 crc kubenswrapper[4756]: I1124 13:26:36.492423 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1785ac95-7008-49da-8c42-97e84c1f605e" path="/var/lib/kubelet/pods/1785ac95-7008-49da-8c42-97e84c1f605e/volumes" Nov 24 13:27:33 crc kubenswrapper[4756]: I1124 13:27:33.479130 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:27:33 crc kubenswrapper[4756]: I1124 13:27:33.479893 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.015070 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hrwcx"] Nov 24 13:27:52 crc kubenswrapper[4756]: E1124 13:27:52.017815 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1785ac95-7008-49da-8c42-97e84c1f605e" containerName="extract-utilities" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.018118 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1785ac95-7008-49da-8c42-97e84c1f605e" containerName="extract-utilities" Nov 24 13:27:52 crc kubenswrapper[4756]: E1124 13:27:52.018295 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1785ac95-7008-49da-8c42-97e84c1f605e" containerName="registry-server" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.018410 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1785ac95-7008-49da-8c42-97e84c1f605e" containerName="registry-server" Nov 24 13:27:52 crc kubenswrapper[4756]: E1124 13:27:52.020760 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1785ac95-7008-49da-8c42-97e84c1f605e" containerName="extract-content" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.020859 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1785ac95-7008-49da-8c42-97e84c1f605e" containerName="extract-content" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.021384 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1785ac95-7008-49da-8c42-97e84c1f605e" containerName="registry-server" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.025975 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.026314 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hrwcx"] Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.093274 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61269c4-4dc5-4675-886f-043dd6452c8d-utilities\") pod \"certified-operators-hrwcx\" (UID: \"a61269c4-4dc5-4675-886f-043dd6452c8d\") " pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.093546 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61269c4-4dc5-4675-886f-043dd6452c8d-catalog-content\") pod \"certified-operators-hrwcx\" (UID: \"a61269c4-4dc5-4675-886f-043dd6452c8d\") " pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.093615 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nvgk\" (UniqueName: \"kubernetes.io/projected/a61269c4-4dc5-4675-886f-043dd6452c8d-kube-api-access-2nvgk\") pod \"certified-operators-hrwcx\" (UID: \"a61269c4-4dc5-4675-886f-043dd6452c8d\") " pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.195398 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61269c4-4dc5-4675-886f-043dd6452c8d-catalog-content\") pod \"certified-operators-hrwcx\" (UID: \"a61269c4-4dc5-4675-886f-043dd6452c8d\") " pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.195480 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2nvgk\" (UniqueName: \"kubernetes.io/projected/a61269c4-4dc5-4675-886f-043dd6452c8d-kube-api-access-2nvgk\") pod \"certified-operators-hrwcx\" (UID: \"a61269c4-4dc5-4675-886f-043dd6452c8d\") " pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.195555 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61269c4-4dc5-4675-886f-043dd6452c8d-utilities\") pod \"certified-operators-hrwcx\" (UID: \"a61269c4-4dc5-4675-886f-043dd6452c8d\") " pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.195920 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61269c4-4dc5-4675-886f-043dd6452c8d-catalog-content\") pod \"certified-operators-hrwcx\" (UID: \"a61269c4-4dc5-4675-886f-043dd6452c8d\") " pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.196052 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61269c4-4dc5-4675-886f-043dd6452c8d-utilities\") pod \"certified-operators-hrwcx\" (UID: \"a61269c4-4dc5-4675-886f-043dd6452c8d\") " pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.215943 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nvgk\" (UniqueName: \"kubernetes.io/projected/a61269c4-4dc5-4675-886f-043dd6452c8d-kube-api-access-2nvgk\") pod \"certified-operators-hrwcx\" (UID: \"a61269c4-4dc5-4675-886f-043dd6452c8d\") " pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.362440 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:27:52 crc kubenswrapper[4756]: I1124 13:27:52.851420 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hrwcx"] Nov 24 13:27:53 crc kubenswrapper[4756]: I1124 13:27:53.659664 4756 generic.go:334] "Generic (PLEG): container finished" podID="a61269c4-4dc5-4675-886f-043dd6452c8d" containerID="676f9f133a92fbfe16def22e4bb1d2021adaca1b4e112c045da127de8f3dba5d" exitCode=0 Nov 24 13:27:53 crc kubenswrapper[4756]: I1124 13:27:53.659751 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrwcx" event={"ID":"a61269c4-4dc5-4675-886f-043dd6452c8d","Type":"ContainerDied","Data":"676f9f133a92fbfe16def22e4bb1d2021adaca1b4e112c045da127de8f3dba5d"} Nov 24 13:27:53 crc kubenswrapper[4756]: I1124 13:27:53.659971 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrwcx" event={"ID":"a61269c4-4dc5-4675-886f-043dd6452c8d","Type":"ContainerStarted","Data":"80188478e62c39b3054caf9e97eacc1293eb91122ed18a79b0ae3d8651572d0e"} Nov 24 13:27:54 crc kubenswrapper[4756]: I1124 13:27:54.673959 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrwcx" event={"ID":"a61269c4-4dc5-4675-886f-043dd6452c8d","Type":"ContainerStarted","Data":"722bc4dc284005a52f1c2961fdbdd2bf0ef10f0c6db157c307af83d625bacc8d"} Nov 24 13:27:55 crc kubenswrapper[4756]: I1124 13:27:55.682815 4756 generic.go:334] "Generic (PLEG): container finished" podID="a61269c4-4dc5-4675-886f-043dd6452c8d" containerID="722bc4dc284005a52f1c2961fdbdd2bf0ef10f0c6db157c307af83d625bacc8d" exitCode=0 Nov 24 13:27:55 crc kubenswrapper[4756]: I1124 13:27:55.682867 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrwcx" 
event={"ID":"a61269c4-4dc5-4675-886f-043dd6452c8d","Type":"ContainerDied","Data":"722bc4dc284005a52f1c2961fdbdd2bf0ef10f0c6db157c307af83d625bacc8d"} Nov 24 13:27:56 crc kubenswrapper[4756]: I1124 13:27:56.692821 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrwcx" event={"ID":"a61269c4-4dc5-4675-886f-043dd6452c8d","Type":"ContainerStarted","Data":"2341b20a2e947db552bc4f014a69934357f4124f9eafac1f89fe69078cffc1fc"} Nov 24 13:27:56 crc kubenswrapper[4756]: I1124 13:27:56.717717 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hrwcx" podStartSLOduration=3.209461337 podStartE2EDuration="5.717700087s" podCreationTimestamp="2025-11-24 13:27:51 +0000 UTC" firstStartedPulling="2025-11-24 13:27:53.661818081 +0000 UTC m=+3606.019332253" lastFinishedPulling="2025-11-24 13:27:56.170056851 +0000 UTC m=+3608.527571003" observedRunningTime="2025-11-24 13:27:56.71404413 +0000 UTC m=+3609.071558342" watchObservedRunningTime="2025-11-24 13:27:56.717700087 +0000 UTC m=+3609.075214229" Nov 24 13:28:02 crc kubenswrapper[4756]: I1124 13:28:02.363146 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:28:02 crc kubenswrapper[4756]: I1124 13:28:02.364729 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:28:02 crc kubenswrapper[4756]: I1124 13:28:02.413587 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:28:02 crc kubenswrapper[4756]: I1124 13:28:02.819487 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:28:02 crc kubenswrapper[4756]: I1124 13:28:02.875936 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-hrwcx"] Nov 24 13:28:03 crc kubenswrapper[4756]: I1124 13:28:03.479686 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:28:03 crc kubenswrapper[4756]: I1124 13:28:03.479742 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:28:04 crc kubenswrapper[4756]: I1124 13:28:04.772579 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hrwcx" podUID="a61269c4-4dc5-4675-886f-043dd6452c8d" containerName="registry-server" containerID="cri-o://2341b20a2e947db552bc4f014a69934357f4124f9eafac1f89fe69078cffc1fc" gracePeriod=2 Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.354560 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.415626 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61269c4-4dc5-4675-886f-043dd6452c8d-catalog-content\") pod \"a61269c4-4dc5-4675-886f-043dd6452c8d\" (UID: \"a61269c4-4dc5-4675-886f-043dd6452c8d\") " Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.415770 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61269c4-4dc5-4675-886f-043dd6452c8d-utilities\") pod \"a61269c4-4dc5-4675-886f-043dd6452c8d\" (UID: \"a61269c4-4dc5-4675-886f-043dd6452c8d\") " Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.415867 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nvgk\" (UniqueName: \"kubernetes.io/projected/a61269c4-4dc5-4675-886f-043dd6452c8d-kube-api-access-2nvgk\") pod \"a61269c4-4dc5-4675-886f-043dd6452c8d\" (UID: \"a61269c4-4dc5-4675-886f-043dd6452c8d\") " Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.417321 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a61269c4-4dc5-4675-886f-043dd6452c8d-utilities" (OuterVolumeSpecName: "utilities") pod "a61269c4-4dc5-4675-886f-043dd6452c8d" (UID: "a61269c4-4dc5-4675-886f-043dd6452c8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.423512 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a61269c4-4dc5-4675-886f-043dd6452c8d-kube-api-access-2nvgk" (OuterVolumeSpecName: "kube-api-access-2nvgk") pod "a61269c4-4dc5-4675-886f-043dd6452c8d" (UID: "a61269c4-4dc5-4675-886f-043dd6452c8d"). InnerVolumeSpecName "kube-api-access-2nvgk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.464418 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a61269c4-4dc5-4675-886f-043dd6452c8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a61269c4-4dc5-4675-886f-043dd6452c8d" (UID: "a61269c4-4dc5-4675-886f-043dd6452c8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.518456 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61269c4-4dc5-4675-886f-043dd6452c8d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.518494 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61269c4-4dc5-4675-886f-043dd6452c8d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.518504 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nvgk\" (UniqueName: \"kubernetes.io/projected/a61269c4-4dc5-4675-886f-043dd6452c8d-kube-api-access-2nvgk\") on node \"crc\" DevicePath \"\"" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.786228 4756 generic.go:334] "Generic (PLEG): container finished" podID="a61269c4-4dc5-4675-886f-043dd6452c8d" containerID="2341b20a2e947db552bc4f014a69934357f4124f9eafac1f89fe69078cffc1fc" exitCode=0 Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.786299 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrwcx" event={"ID":"a61269c4-4dc5-4675-886f-043dd6452c8d","Type":"ContainerDied","Data":"2341b20a2e947db552bc4f014a69934357f4124f9eafac1f89fe69078cffc1fc"} Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.786349 4756 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hrwcx" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.786379 4756 scope.go:117] "RemoveContainer" containerID="2341b20a2e947db552bc4f014a69934357f4124f9eafac1f89fe69078cffc1fc" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.786364 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrwcx" event={"ID":"a61269c4-4dc5-4675-886f-043dd6452c8d","Type":"ContainerDied","Data":"80188478e62c39b3054caf9e97eacc1293eb91122ed18a79b0ae3d8651572d0e"} Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.828250 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hrwcx"] Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.833218 4756 scope.go:117] "RemoveContainer" containerID="722bc4dc284005a52f1c2961fdbdd2bf0ef10f0c6db157c307af83d625bacc8d" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.835781 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hrwcx"] Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.864674 4756 scope.go:117] "RemoveContainer" containerID="676f9f133a92fbfe16def22e4bb1d2021adaca1b4e112c045da127de8f3dba5d" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.923026 4756 scope.go:117] "RemoveContainer" containerID="2341b20a2e947db552bc4f014a69934357f4124f9eafac1f89fe69078cffc1fc" Nov 24 13:28:05 crc kubenswrapper[4756]: E1124 13:28:05.923570 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2341b20a2e947db552bc4f014a69934357f4124f9eafac1f89fe69078cffc1fc\": container with ID starting with 2341b20a2e947db552bc4f014a69934357f4124f9eafac1f89fe69078cffc1fc not found: ID does not exist" containerID="2341b20a2e947db552bc4f014a69934357f4124f9eafac1f89fe69078cffc1fc" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.923612 
4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2341b20a2e947db552bc4f014a69934357f4124f9eafac1f89fe69078cffc1fc"} err="failed to get container status \"2341b20a2e947db552bc4f014a69934357f4124f9eafac1f89fe69078cffc1fc\": rpc error: code = NotFound desc = could not find container \"2341b20a2e947db552bc4f014a69934357f4124f9eafac1f89fe69078cffc1fc\": container with ID starting with 2341b20a2e947db552bc4f014a69934357f4124f9eafac1f89fe69078cffc1fc not found: ID does not exist" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.923640 4756 scope.go:117] "RemoveContainer" containerID="722bc4dc284005a52f1c2961fdbdd2bf0ef10f0c6db157c307af83d625bacc8d" Nov 24 13:28:05 crc kubenswrapper[4756]: E1124 13:28:05.923964 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"722bc4dc284005a52f1c2961fdbdd2bf0ef10f0c6db157c307af83d625bacc8d\": container with ID starting with 722bc4dc284005a52f1c2961fdbdd2bf0ef10f0c6db157c307af83d625bacc8d not found: ID does not exist" containerID="722bc4dc284005a52f1c2961fdbdd2bf0ef10f0c6db157c307af83d625bacc8d" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.923988 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722bc4dc284005a52f1c2961fdbdd2bf0ef10f0c6db157c307af83d625bacc8d"} err="failed to get container status \"722bc4dc284005a52f1c2961fdbdd2bf0ef10f0c6db157c307af83d625bacc8d\": rpc error: code = NotFound desc = could not find container \"722bc4dc284005a52f1c2961fdbdd2bf0ef10f0c6db157c307af83d625bacc8d\": container with ID starting with 722bc4dc284005a52f1c2961fdbdd2bf0ef10f0c6db157c307af83d625bacc8d not found: ID does not exist" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.924003 4756 scope.go:117] "RemoveContainer" containerID="676f9f133a92fbfe16def22e4bb1d2021adaca1b4e112c045da127de8f3dba5d" Nov 24 13:28:05 crc kubenswrapper[4756]: E1124 
13:28:05.924506 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"676f9f133a92fbfe16def22e4bb1d2021adaca1b4e112c045da127de8f3dba5d\": container with ID starting with 676f9f133a92fbfe16def22e4bb1d2021adaca1b4e112c045da127de8f3dba5d not found: ID does not exist" containerID="676f9f133a92fbfe16def22e4bb1d2021adaca1b4e112c045da127de8f3dba5d" Nov 24 13:28:05 crc kubenswrapper[4756]: I1124 13:28:05.924547 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676f9f133a92fbfe16def22e4bb1d2021adaca1b4e112c045da127de8f3dba5d"} err="failed to get container status \"676f9f133a92fbfe16def22e4bb1d2021adaca1b4e112c045da127de8f3dba5d\": rpc error: code = NotFound desc = could not find container \"676f9f133a92fbfe16def22e4bb1d2021adaca1b4e112c045da127de8f3dba5d\": container with ID starting with 676f9f133a92fbfe16def22e4bb1d2021adaca1b4e112c045da127de8f3dba5d not found: ID does not exist" Nov 24 13:28:06 crc kubenswrapper[4756]: I1124 13:28:06.489916 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a61269c4-4dc5-4675-886f-043dd6452c8d" path="/var/lib/kubelet/pods/a61269c4-4dc5-4675-886f-043dd6452c8d/volumes" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.821040 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mqvf6"] Nov 24 13:28:27 crc kubenswrapper[4756]: E1124 13:28:27.822186 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61269c4-4dc5-4675-886f-043dd6452c8d" containerName="extract-utilities" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.822201 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61269c4-4dc5-4675-886f-043dd6452c8d" containerName="extract-utilities" Nov 24 13:28:27 crc kubenswrapper[4756]: E1124 13:28:27.822214 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a61269c4-4dc5-4675-886f-043dd6452c8d" containerName="extract-content" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.822220 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61269c4-4dc5-4675-886f-043dd6452c8d" containerName="extract-content" Nov 24 13:28:27 crc kubenswrapper[4756]: E1124 13:28:27.822237 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61269c4-4dc5-4675-886f-043dd6452c8d" containerName="registry-server" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.822244 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61269c4-4dc5-4675-886f-043dd6452c8d" containerName="registry-server" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.822442 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61269c4-4dc5-4675-886f-043dd6452c8d" containerName="registry-server" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.824209 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.844297 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqvf6"] Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.884967 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f308a3c-47c8-4973-af02-043b6a69fe15-utilities\") pod \"redhat-marketplace-mqvf6\" (UID: \"6f308a3c-47c8-4973-af02-043b6a69fe15\") " pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.885147 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nwql\" (UniqueName: \"kubernetes.io/projected/6f308a3c-47c8-4973-af02-043b6a69fe15-kube-api-access-6nwql\") pod \"redhat-marketplace-mqvf6\" (UID: 
\"6f308a3c-47c8-4973-af02-043b6a69fe15\") " pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.885338 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f308a3c-47c8-4973-af02-043b6a69fe15-catalog-content\") pod \"redhat-marketplace-mqvf6\" (UID: \"6f308a3c-47c8-4973-af02-043b6a69fe15\") " pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.987554 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f308a3c-47c8-4973-af02-043b6a69fe15-utilities\") pod \"redhat-marketplace-mqvf6\" (UID: \"6f308a3c-47c8-4973-af02-043b6a69fe15\") " pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.987654 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nwql\" (UniqueName: \"kubernetes.io/projected/6f308a3c-47c8-4973-af02-043b6a69fe15-kube-api-access-6nwql\") pod \"redhat-marketplace-mqvf6\" (UID: \"6f308a3c-47c8-4973-af02-043b6a69fe15\") " pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.987815 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f308a3c-47c8-4973-af02-043b6a69fe15-catalog-content\") pod \"redhat-marketplace-mqvf6\" (UID: \"6f308a3c-47c8-4973-af02-043b6a69fe15\") " pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.988263 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f308a3c-47c8-4973-af02-043b6a69fe15-catalog-content\") pod \"redhat-marketplace-mqvf6\" (UID: 
\"6f308a3c-47c8-4973-af02-043b6a69fe15\") " pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:27 crc kubenswrapper[4756]: I1124 13:28:27.988259 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f308a3c-47c8-4973-af02-043b6a69fe15-utilities\") pod \"redhat-marketplace-mqvf6\" (UID: \"6f308a3c-47c8-4973-af02-043b6a69fe15\") " pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:28 crc kubenswrapper[4756]: I1124 13:28:28.017997 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nwql\" (UniqueName: \"kubernetes.io/projected/6f308a3c-47c8-4973-af02-043b6a69fe15-kube-api-access-6nwql\") pod \"redhat-marketplace-mqvf6\" (UID: \"6f308a3c-47c8-4973-af02-043b6a69fe15\") " pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:28 crc kubenswrapper[4756]: I1124 13:28:28.148951 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:28 crc kubenswrapper[4756]: I1124 13:28:28.666108 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqvf6"] Nov 24 13:28:29 crc kubenswrapper[4756]: I1124 13:28:29.065440 4756 generic.go:334] "Generic (PLEG): container finished" podID="6f308a3c-47c8-4973-af02-043b6a69fe15" containerID="000e35d892eaceb5af308d290147df598aef26075fa14347e11d72c7d2281c47" exitCode=0 Nov 24 13:28:29 crc kubenswrapper[4756]: I1124 13:28:29.065526 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqvf6" event={"ID":"6f308a3c-47c8-4973-af02-043b6a69fe15","Type":"ContainerDied","Data":"000e35d892eaceb5af308d290147df598aef26075fa14347e11d72c7d2281c47"} Nov 24 13:28:29 crc kubenswrapper[4756]: I1124 13:28:29.065575 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqvf6" 
event={"ID":"6f308a3c-47c8-4973-af02-043b6a69fe15","Type":"ContainerStarted","Data":"2dd445a9876c436d8f8dc0f268b49c78fdd2d70b6070e29b02dcfe0298990ec8"} Nov 24 13:28:31 crc kubenswrapper[4756]: I1124 13:28:31.103668 4756 generic.go:334] "Generic (PLEG): container finished" podID="6f308a3c-47c8-4973-af02-043b6a69fe15" containerID="6b4e7042a0dfa7f0e3da4d4565551a79ffc946a45c444acde1d14349aa9df2a3" exitCode=0 Nov 24 13:28:31 crc kubenswrapper[4756]: I1124 13:28:31.104531 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqvf6" event={"ID":"6f308a3c-47c8-4973-af02-043b6a69fe15","Type":"ContainerDied","Data":"6b4e7042a0dfa7f0e3da4d4565551a79ffc946a45c444acde1d14349aa9df2a3"} Nov 24 13:28:32 crc kubenswrapper[4756]: I1124 13:28:32.121303 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqvf6" event={"ID":"6f308a3c-47c8-4973-af02-043b6a69fe15","Type":"ContainerStarted","Data":"72fe2d9774e5a6f281a078d5dc3ab81237d47fce64766e879affa8215f4050b8"} Nov 24 13:28:32 crc kubenswrapper[4756]: I1124 13:28:32.157183 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mqvf6" podStartSLOduration=2.638415346 podStartE2EDuration="5.157133343s" podCreationTimestamp="2025-11-24 13:28:27 +0000 UTC" firstStartedPulling="2025-11-24 13:28:29.067868264 +0000 UTC m=+3641.425382416" lastFinishedPulling="2025-11-24 13:28:31.586586241 +0000 UTC m=+3643.944100413" observedRunningTime="2025-11-24 13:28:32.152512631 +0000 UTC m=+3644.510026783" watchObservedRunningTime="2025-11-24 13:28:32.157133343 +0000 UTC m=+3644.514647495" Nov 24 13:28:33 crc kubenswrapper[4756]: I1124 13:28:33.479369 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Nov 24 13:28:33 crc kubenswrapper[4756]: I1124 13:28:33.479752 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:28:33 crc kubenswrapper[4756]: I1124 13:28:33.479825 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 13:28:33 crc kubenswrapper[4756]: I1124 13:28:33.481040 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:28:33 crc kubenswrapper[4756]: I1124 13:28:33.481133 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" gracePeriod=600 Nov 24 13:28:33 crc kubenswrapper[4756]: E1124 13:28:33.661040 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:28:34 crc 
kubenswrapper[4756]: I1124 13:28:34.151690 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" exitCode=0 Nov 24 13:28:34 crc kubenswrapper[4756]: I1124 13:28:34.151772 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d"} Nov 24 13:28:34 crc kubenswrapper[4756]: I1124 13:28:34.151849 4756 scope.go:117] "RemoveContainer" containerID="1525cecc0222ec0a10eaa070816189df590572cf560144a3645affe2d27a9285" Nov 24 13:28:34 crc kubenswrapper[4756]: I1124 13:28:34.152796 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:28:34 crc kubenswrapper[4756]: E1124 13:28:34.153724 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:28:38 crc kubenswrapper[4756]: I1124 13:28:38.149853 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:38 crc kubenswrapper[4756]: I1124 13:28:38.150392 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:38 crc kubenswrapper[4756]: I1124 13:28:38.193805 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 
13:28:38 crc kubenswrapper[4756]: I1124 13:28:38.237725 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:38 crc kubenswrapper[4756]: I1124 13:28:38.436992 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqvf6"] Nov 24 13:28:40 crc kubenswrapper[4756]: I1124 13:28:40.215741 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mqvf6" podUID="6f308a3c-47c8-4973-af02-043b6a69fe15" containerName="registry-server" containerID="cri-o://72fe2d9774e5a6f281a078d5dc3ab81237d47fce64766e879affa8215f4050b8" gracePeriod=2 Nov 24 13:28:40 crc kubenswrapper[4756]: I1124 13:28:40.764377 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:40 crc kubenswrapper[4756]: I1124 13:28:40.909258 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nwql\" (UniqueName: \"kubernetes.io/projected/6f308a3c-47c8-4973-af02-043b6a69fe15-kube-api-access-6nwql\") pod \"6f308a3c-47c8-4973-af02-043b6a69fe15\" (UID: \"6f308a3c-47c8-4973-af02-043b6a69fe15\") " Nov 24 13:28:40 crc kubenswrapper[4756]: I1124 13:28:40.909419 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f308a3c-47c8-4973-af02-043b6a69fe15-catalog-content\") pod \"6f308a3c-47c8-4973-af02-043b6a69fe15\" (UID: \"6f308a3c-47c8-4973-af02-043b6a69fe15\") " Nov 24 13:28:40 crc kubenswrapper[4756]: I1124 13:28:40.909549 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f308a3c-47c8-4973-af02-043b6a69fe15-utilities\") pod \"6f308a3c-47c8-4973-af02-043b6a69fe15\" (UID: \"6f308a3c-47c8-4973-af02-043b6a69fe15\") " Nov 24 
13:28:40 crc kubenswrapper[4756]: I1124 13:28:40.911301 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f308a3c-47c8-4973-af02-043b6a69fe15-utilities" (OuterVolumeSpecName: "utilities") pod "6f308a3c-47c8-4973-af02-043b6a69fe15" (UID: "6f308a3c-47c8-4973-af02-043b6a69fe15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:28:40 crc kubenswrapper[4756]: I1124 13:28:40.916294 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f308a3c-47c8-4973-af02-043b6a69fe15-kube-api-access-6nwql" (OuterVolumeSpecName: "kube-api-access-6nwql") pod "6f308a3c-47c8-4973-af02-043b6a69fe15" (UID: "6f308a3c-47c8-4973-af02-043b6a69fe15"). InnerVolumeSpecName "kube-api-access-6nwql". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:28:40 crc kubenswrapper[4756]: I1124 13:28:40.927994 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f308a3c-47c8-4973-af02-043b6a69fe15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f308a3c-47c8-4973-af02-043b6a69fe15" (UID: "6f308a3c-47c8-4973-af02-043b6a69fe15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.011583 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f308a3c-47c8-4973-af02-043b6a69fe15-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.011618 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f308a3c-47c8-4973-af02-043b6a69fe15-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.011638 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nwql\" (UniqueName: \"kubernetes.io/projected/6f308a3c-47c8-4973-af02-043b6a69fe15-kube-api-access-6nwql\") on node \"crc\" DevicePath \"\"" Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.231078 4756 generic.go:334] "Generic (PLEG): container finished" podID="6f308a3c-47c8-4973-af02-043b6a69fe15" containerID="72fe2d9774e5a6f281a078d5dc3ab81237d47fce64766e879affa8215f4050b8" exitCode=0 Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.231128 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqvf6" event={"ID":"6f308a3c-47c8-4973-af02-043b6a69fe15","Type":"ContainerDied","Data":"72fe2d9774e5a6f281a078d5dc3ab81237d47fce64766e879affa8215f4050b8"} Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.231199 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqvf6" event={"ID":"6f308a3c-47c8-4973-af02-043b6a69fe15","Type":"ContainerDied","Data":"2dd445a9876c436d8f8dc0f268b49c78fdd2d70b6070e29b02dcfe0298990ec8"} Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.231223 4756 scope.go:117] "RemoveContainer" containerID="72fe2d9774e5a6f281a078d5dc3ab81237d47fce64766e879affa8215f4050b8" Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 
13:28:41.231231 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqvf6" Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.301004 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqvf6"] Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.302063 4756 scope.go:117] "RemoveContainer" containerID="6b4e7042a0dfa7f0e3da4d4565551a79ffc946a45c444acde1d14349aa9df2a3" Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.312539 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqvf6"] Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.336517 4756 scope.go:117] "RemoveContainer" containerID="000e35d892eaceb5af308d290147df598aef26075fa14347e11d72c7d2281c47" Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.379851 4756 scope.go:117] "RemoveContainer" containerID="72fe2d9774e5a6f281a078d5dc3ab81237d47fce64766e879affa8215f4050b8" Nov 24 13:28:41 crc kubenswrapper[4756]: E1124 13:28:41.380513 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72fe2d9774e5a6f281a078d5dc3ab81237d47fce64766e879affa8215f4050b8\": container with ID starting with 72fe2d9774e5a6f281a078d5dc3ab81237d47fce64766e879affa8215f4050b8 not found: ID does not exist" containerID="72fe2d9774e5a6f281a078d5dc3ab81237d47fce64766e879affa8215f4050b8" Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.380591 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72fe2d9774e5a6f281a078d5dc3ab81237d47fce64766e879affa8215f4050b8"} err="failed to get container status \"72fe2d9774e5a6f281a078d5dc3ab81237d47fce64766e879affa8215f4050b8\": rpc error: code = NotFound desc = could not find container \"72fe2d9774e5a6f281a078d5dc3ab81237d47fce64766e879affa8215f4050b8\": container with ID starting with 
72fe2d9774e5a6f281a078d5dc3ab81237d47fce64766e879affa8215f4050b8 not found: ID does not exist" Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.380625 4756 scope.go:117] "RemoveContainer" containerID="6b4e7042a0dfa7f0e3da4d4565551a79ffc946a45c444acde1d14349aa9df2a3" Nov 24 13:28:41 crc kubenswrapper[4756]: E1124 13:28:41.381004 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b4e7042a0dfa7f0e3da4d4565551a79ffc946a45c444acde1d14349aa9df2a3\": container with ID starting with 6b4e7042a0dfa7f0e3da4d4565551a79ffc946a45c444acde1d14349aa9df2a3 not found: ID does not exist" containerID="6b4e7042a0dfa7f0e3da4d4565551a79ffc946a45c444acde1d14349aa9df2a3" Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.381048 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4e7042a0dfa7f0e3da4d4565551a79ffc946a45c444acde1d14349aa9df2a3"} err="failed to get container status \"6b4e7042a0dfa7f0e3da4d4565551a79ffc946a45c444acde1d14349aa9df2a3\": rpc error: code = NotFound desc = could not find container \"6b4e7042a0dfa7f0e3da4d4565551a79ffc946a45c444acde1d14349aa9df2a3\": container with ID starting with 6b4e7042a0dfa7f0e3da4d4565551a79ffc946a45c444acde1d14349aa9df2a3 not found: ID does not exist" Nov 24 13:28:41 crc kubenswrapper[4756]: I1124 13:28:41.381080 4756 scope.go:117] "RemoveContainer" containerID="000e35d892eaceb5af308d290147df598aef26075fa14347e11d72c7d2281c47" Nov 24 13:28:41 crc kubenswrapper[4756]: E1124 13:28:41.381566 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000e35d892eaceb5af308d290147df598aef26075fa14347e11d72c7d2281c47\": container with ID starting with 000e35d892eaceb5af308d290147df598aef26075fa14347e11d72c7d2281c47 not found: ID does not exist" containerID="000e35d892eaceb5af308d290147df598aef26075fa14347e11d72c7d2281c47" Nov 24 13:28:41 crc 
kubenswrapper[4756]: I1124 13:28:41.381594 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000e35d892eaceb5af308d290147df598aef26075fa14347e11d72c7d2281c47"} err="failed to get container status \"000e35d892eaceb5af308d290147df598aef26075fa14347e11d72c7d2281c47\": rpc error: code = NotFound desc = could not find container \"000e35d892eaceb5af308d290147df598aef26075fa14347e11d72c7d2281c47\": container with ID starting with 000e35d892eaceb5af308d290147df598aef26075fa14347e11d72c7d2281c47 not found: ID does not exist" Nov 24 13:28:42 crc kubenswrapper[4756]: I1124 13:28:42.487385 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f308a3c-47c8-4973-af02-043b6a69fe15" path="/var/lib/kubelet/pods/6f308a3c-47c8-4973-af02-043b6a69fe15/volumes" Nov 24 13:28:45 crc kubenswrapper[4756]: I1124 13:28:45.475584 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:28:45 crc kubenswrapper[4756]: E1124 13:28:45.476229 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:29:00 crc kubenswrapper[4756]: I1124 13:29:00.476146 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:29:00 crc kubenswrapper[4756]: E1124 13:29:00.476990 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:29:11 crc kubenswrapper[4756]: I1124 13:29:11.475537 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:29:11 crc kubenswrapper[4756]: E1124 13:29:11.476273 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:29:24 crc kubenswrapper[4756]: I1124 13:29:24.476546 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:29:24 crc kubenswrapper[4756]: E1124 13:29:24.477623 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.168967 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9jmd6"] Nov 24 13:29:29 crc kubenswrapper[4756]: E1124 13:29:29.169573 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f308a3c-47c8-4973-af02-043b6a69fe15" containerName="extract-content" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.169584 
4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f308a3c-47c8-4973-af02-043b6a69fe15" containerName="extract-content" Nov 24 13:29:29 crc kubenswrapper[4756]: E1124 13:29:29.169601 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f308a3c-47c8-4973-af02-043b6a69fe15" containerName="registry-server" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.169608 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f308a3c-47c8-4973-af02-043b6a69fe15" containerName="registry-server" Nov 24 13:29:29 crc kubenswrapper[4756]: E1124 13:29:29.169624 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f308a3c-47c8-4973-af02-043b6a69fe15" containerName="extract-utilities" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.169631 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f308a3c-47c8-4973-af02-043b6a69fe15" containerName="extract-utilities" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.169830 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f308a3c-47c8-4973-af02-043b6a69fe15" containerName="registry-server" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.171355 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.198363 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jmd6"] Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.231636 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f079afa-a62c-4f33-8f8c-61853a77c73d-utilities\") pod \"redhat-operators-9jmd6\" (UID: \"6f079afa-a62c-4f33-8f8c-61853a77c73d\") " pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.231705 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsn46\" (UniqueName: \"kubernetes.io/projected/6f079afa-a62c-4f33-8f8c-61853a77c73d-kube-api-access-nsn46\") pod \"redhat-operators-9jmd6\" (UID: \"6f079afa-a62c-4f33-8f8c-61853a77c73d\") " pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.231759 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f079afa-a62c-4f33-8f8c-61853a77c73d-catalog-content\") pod \"redhat-operators-9jmd6\" (UID: \"6f079afa-a62c-4f33-8f8c-61853a77c73d\") " pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.333073 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsn46\" (UniqueName: \"kubernetes.io/projected/6f079afa-a62c-4f33-8f8c-61853a77c73d-kube-api-access-nsn46\") pod \"redhat-operators-9jmd6\" (UID: \"6f079afa-a62c-4f33-8f8c-61853a77c73d\") " pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.333175 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f079afa-a62c-4f33-8f8c-61853a77c73d-catalog-content\") pod \"redhat-operators-9jmd6\" (UID: \"6f079afa-a62c-4f33-8f8c-61853a77c73d\") " pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.333366 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f079afa-a62c-4f33-8f8c-61853a77c73d-utilities\") pod \"redhat-operators-9jmd6\" (UID: \"6f079afa-a62c-4f33-8f8c-61853a77c73d\") " pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.333739 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f079afa-a62c-4f33-8f8c-61853a77c73d-catalog-content\") pod \"redhat-operators-9jmd6\" (UID: \"6f079afa-a62c-4f33-8f8c-61853a77c73d\") " pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.333837 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f079afa-a62c-4f33-8f8c-61853a77c73d-utilities\") pod \"redhat-operators-9jmd6\" (UID: \"6f079afa-a62c-4f33-8f8c-61853a77c73d\") " pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.358876 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsn46\" (UniqueName: \"kubernetes.io/projected/6f079afa-a62c-4f33-8f8c-61853a77c73d-kube-api-access-nsn46\") pod \"redhat-operators-9jmd6\" (UID: \"6f079afa-a62c-4f33-8f8c-61853a77c73d\") " pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:29:29 crc kubenswrapper[4756]: I1124 13:29:29.494670 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:29:30 crc kubenswrapper[4756]: I1124 13:29:30.143964 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jmd6"] Nov 24 13:29:30 crc kubenswrapper[4756]: I1124 13:29:30.791134 4756 generic.go:334] "Generic (PLEG): container finished" podID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerID="4ffafb0a21f1a55c3463c90ae43fac3e484762f15754ee27f47532bdfae58186" exitCode=0 Nov 24 13:29:30 crc kubenswrapper[4756]: I1124 13:29:30.791226 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jmd6" event={"ID":"6f079afa-a62c-4f33-8f8c-61853a77c73d","Type":"ContainerDied","Data":"4ffafb0a21f1a55c3463c90ae43fac3e484762f15754ee27f47532bdfae58186"} Nov 24 13:29:30 crc kubenswrapper[4756]: I1124 13:29:30.791537 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jmd6" event={"ID":"6f079afa-a62c-4f33-8f8c-61853a77c73d","Type":"ContainerStarted","Data":"383c3532307e9f70a341ffd63aea8e31081a8256aab292d7b081e275ea7eb27d"} Nov 24 13:29:30 crc kubenswrapper[4756]: I1124 13:29:30.971057 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-8kp9p" Nov 24 13:29:32 crc kubenswrapper[4756]: I1124 13:29:32.817229 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jmd6" event={"ID":"6f079afa-a62c-4f33-8f8c-61853a77c73d","Type":"ContainerStarted","Data":"6aaa7fd088e7ffb1bcc59229af88f874ee72702bfb93bc4bc750a53a14073e80"} Nov 24 13:29:37 crc kubenswrapper[4756]: I1124 13:29:37.872830 4756 generic.go:334] "Generic (PLEG): container finished" podID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerID="6aaa7fd088e7ffb1bcc59229af88f874ee72702bfb93bc4bc750a53a14073e80" exitCode=0 Nov 24 13:29:37 crc kubenswrapper[4756]: I1124 13:29:37.872891 4756 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-9jmd6" event={"ID":"6f079afa-a62c-4f33-8f8c-61853a77c73d","Type":"ContainerDied","Data":"6aaa7fd088e7ffb1bcc59229af88f874ee72702bfb93bc4bc750a53a14073e80"} Nov 24 13:29:38 crc kubenswrapper[4756]: I1124 13:29:38.889524 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jmd6" event={"ID":"6f079afa-a62c-4f33-8f8c-61853a77c73d","Type":"ContainerStarted","Data":"82d6e092105697712de47236902b03e37cf998e45f3f6f569798737248a1feb4"} Nov 24 13:29:38 crc kubenswrapper[4756]: I1124 13:29:38.906453 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9jmd6" podStartSLOduration=2.449515304 podStartE2EDuration="9.906430663s" podCreationTimestamp="2025-11-24 13:29:29 +0000 UTC" firstStartedPulling="2025-11-24 13:29:30.792738568 +0000 UTC m=+3703.150252710" lastFinishedPulling="2025-11-24 13:29:38.249653887 +0000 UTC m=+3710.607168069" observedRunningTime="2025-11-24 13:29:38.904396429 +0000 UTC m=+3711.261910581" watchObservedRunningTime="2025-11-24 13:29:38.906430663 +0000 UTC m=+3711.263944815" Nov 24 13:29:39 crc kubenswrapper[4756]: I1124 13:29:39.475488 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:29:39 crc kubenswrapper[4756]: E1124 13:29:39.476070 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:29:39 crc kubenswrapper[4756]: I1124 13:29:39.495710 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:29:39 crc kubenswrapper[4756]: I1124 13:29:39.495771 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:29:40 crc kubenswrapper[4756]: I1124 13:29:40.545251 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jmd6" podUID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerName="registry-server" probeResult="failure" output=< Nov 24 13:29:40 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 13:29:40 crc kubenswrapper[4756]: > Nov 24 13:29:50 crc kubenswrapper[4756]: I1124 13:29:50.541390 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jmd6" podUID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerName="registry-server" probeResult="failure" output=< Nov 24 13:29:50 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 13:29:50 crc kubenswrapper[4756]: > Nov 24 13:29:54 crc kubenswrapper[4756]: I1124 13:29:54.475919 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:29:54 crc kubenswrapper[4756]: E1124 13:29:54.476715 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.192947 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm"] Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.195199 4756 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.200636 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.201964 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.213938 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm"] Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.307002 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9905eb6c-81a9-4461-8a07-51a77fb15cba-secret-volume\") pod \"collect-profiles-29399850-tjrcm\" (UID: \"9905eb6c-81a9-4461-8a07-51a77fb15cba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.307054 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r85lq\" (UniqueName: \"kubernetes.io/projected/9905eb6c-81a9-4461-8a07-51a77fb15cba-kube-api-access-r85lq\") pod \"collect-profiles-29399850-tjrcm\" (UID: \"9905eb6c-81a9-4461-8a07-51a77fb15cba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.307077 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9905eb6c-81a9-4461-8a07-51a77fb15cba-config-volume\") pod \"collect-profiles-29399850-tjrcm\" (UID: \"9905eb6c-81a9-4461-8a07-51a77fb15cba\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.409319 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9905eb6c-81a9-4461-8a07-51a77fb15cba-config-volume\") pod \"collect-profiles-29399850-tjrcm\" (UID: \"9905eb6c-81a9-4461-8a07-51a77fb15cba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.409675 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9905eb6c-81a9-4461-8a07-51a77fb15cba-secret-volume\") pod \"collect-profiles-29399850-tjrcm\" (UID: \"9905eb6c-81a9-4461-8a07-51a77fb15cba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.409940 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r85lq\" (UniqueName: \"kubernetes.io/projected/9905eb6c-81a9-4461-8a07-51a77fb15cba-kube-api-access-r85lq\") pod \"collect-profiles-29399850-tjrcm\" (UID: \"9905eb6c-81a9-4461-8a07-51a77fb15cba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.410290 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9905eb6c-81a9-4461-8a07-51a77fb15cba-config-volume\") pod \"collect-profiles-29399850-tjrcm\" (UID: \"9905eb6c-81a9-4461-8a07-51a77fb15cba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.421533 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9905eb6c-81a9-4461-8a07-51a77fb15cba-secret-volume\") pod \"collect-profiles-29399850-tjrcm\" (UID: \"9905eb6c-81a9-4461-8a07-51a77fb15cba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.433838 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r85lq\" (UniqueName: \"kubernetes.io/projected/9905eb6c-81a9-4461-8a07-51a77fb15cba-kube-api-access-r85lq\") pod \"collect-profiles-29399850-tjrcm\" (UID: \"9905eb6c-81a9-4461-8a07-51a77fb15cba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.523629 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" Nov 24 13:30:00 crc kubenswrapper[4756]: I1124 13:30:00.557928 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jmd6" podUID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerName="registry-server" probeResult="failure" output=< Nov 24 13:30:00 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 13:30:00 crc kubenswrapper[4756]: > Nov 24 13:30:01 crc kubenswrapper[4756]: I1124 13:30:01.021013 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm"] Nov 24 13:30:01 crc kubenswrapper[4756]: I1124 13:30:01.125834 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" event={"ID":"9905eb6c-81a9-4461-8a07-51a77fb15cba","Type":"ContainerStarted","Data":"67cb960ec7e874abe59e6defd4105d24b81c3a0f8ce8b07b4abf2ec7cb317b0f"} Nov 24 13:30:02 crc kubenswrapper[4756]: I1124 13:30:02.140758 4756 generic.go:334] "Generic (PLEG): container finished" 
podID="9905eb6c-81a9-4461-8a07-51a77fb15cba" containerID="663388429c5072935c2751c7aef33edc8a889da89209bea3d5f428c7592c92f3" exitCode=0 Nov 24 13:30:02 crc kubenswrapper[4756]: I1124 13:30:02.141268 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" event={"ID":"9905eb6c-81a9-4461-8a07-51a77fb15cba","Type":"ContainerDied","Data":"663388429c5072935c2751c7aef33edc8a889da89209bea3d5f428c7592c92f3"} Nov 24 13:30:03 crc kubenswrapper[4756]: I1124 13:30:03.730498 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" Nov 24 13:30:03 crc kubenswrapper[4756]: I1124 13:30:03.797234 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9905eb6c-81a9-4461-8a07-51a77fb15cba-config-volume\") pod \"9905eb6c-81a9-4461-8a07-51a77fb15cba\" (UID: \"9905eb6c-81a9-4461-8a07-51a77fb15cba\") " Nov 24 13:30:03 crc kubenswrapper[4756]: I1124 13:30:03.797389 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9905eb6c-81a9-4461-8a07-51a77fb15cba-secret-volume\") pod \"9905eb6c-81a9-4461-8a07-51a77fb15cba\" (UID: \"9905eb6c-81a9-4461-8a07-51a77fb15cba\") " Nov 24 13:30:03 crc kubenswrapper[4756]: I1124 13:30:03.798017 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9905eb6c-81a9-4461-8a07-51a77fb15cba-config-volume" (OuterVolumeSpecName: "config-volume") pod "9905eb6c-81a9-4461-8a07-51a77fb15cba" (UID: "9905eb6c-81a9-4461-8a07-51a77fb15cba"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:30:03 crc kubenswrapper[4756]: I1124 13:30:03.820852 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9905eb6c-81a9-4461-8a07-51a77fb15cba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9905eb6c-81a9-4461-8a07-51a77fb15cba" (UID: "9905eb6c-81a9-4461-8a07-51a77fb15cba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:30:03 crc kubenswrapper[4756]: I1124 13:30:03.898881 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r85lq\" (UniqueName: \"kubernetes.io/projected/9905eb6c-81a9-4461-8a07-51a77fb15cba-kube-api-access-r85lq\") pod \"9905eb6c-81a9-4461-8a07-51a77fb15cba\" (UID: \"9905eb6c-81a9-4461-8a07-51a77fb15cba\") " Nov 24 13:30:03 crc kubenswrapper[4756]: I1124 13:30:03.899643 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9905eb6c-81a9-4461-8a07-51a77fb15cba-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:30:03 crc kubenswrapper[4756]: I1124 13:30:03.899666 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9905eb6c-81a9-4461-8a07-51a77fb15cba-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:30:03 crc kubenswrapper[4756]: I1124 13:30:03.907420 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9905eb6c-81a9-4461-8a07-51a77fb15cba-kube-api-access-r85lq" (OuterVolumeSpecName: "kube-api-access-r85lq") pod "9905eb6c-81a9-4461-8a07-51a77fb15cba" (UID: "9905eb6c-81a9-4461-8a07-51a77fb15cba"). InnerVolumeSpecName "kube-api-access-r85lq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:30:04 crc kubenswrapper[4756]: I1124 13:30:04.001864 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r85lq\" (UniqueName: \"kubernetes.io/projected/9905eb6c-81a9-4461-8a07-51a77fb15cba-kube-api-access-r85lq\") on node \"crc\" DevicePath \"\"" Nov 24 13:30:04 crc kubenswrapper[4756]: I1124 13:30:04.163219 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" event={"ID":"9905eb6c-81a9-4461-8a07-51a77fb15cba","Type":"ContainerDied","Data":"67cb960ec7e874abe59e6defd4105d24b81c3a0f8ce8b07b4abf2ec7cb317b0f"} Nov 24 13:30:04 crc kubenswrapper[4756]: I1124 13:30:04.163281 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67cb960ec7e874abe59e6defd4105d24b81c3a0f8ce8b07b4abf2ec7cb317b0f" Nov 24 13:30:04 crc kubenswrapper[4756]: I1124 13:30:04.163976 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399850-tjrcm" Nov 24 13:30:04 crc kubenswrapper[4756]: I1124 13:30:04.825222 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl"] Nov 24 13:30:04 crc kubenswrapper[4756]: I1124 13:30:04.836869 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399805-t6tkl"] Nov 24 13:30:06 crc kubenswrapper[4756]: I1124 13:30:06.482805 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:30:06 crc kubenswrapper[4756]: E1124 13:30:06.483344 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:30:06 crc kubenswrapper[4756]: I1124 13:30:06.495763 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d2fe035-f2da-4a24-9796-1bdeb6198091" path="/var/lib/kubelet/pods/6d2fe035-f2da-4a24-9796-1bdeb6198091/volumes" Nov 24 13:30:10 crc kubenswrapper[4756]: I1124 13:30:10.550688 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jmd6" podUID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerName="registry-server" probeResult="failure" output=< Nov 24 13:30:10 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 13:30:10 crc kubenswrapper[4756]: > Nov 24 13:30:19 crc kubenswrapper[4756]: I1124 13:30:19.543952 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:30:19 crc kubenswrapper[4756]: I1124 13:30:19.596717 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:30:19 crc kubenswrapper[4756]: I1124 13:30:19.774199 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jmd6"] Nov 24 13:30:20 crc kubenswrapper[4756]: I1124 13:30:20.745383 4756 scope.go:117] "RemoveContainer" containerID="0240db7d1e843641993b9faa1aecf00bcf4ab4fbdea6e2b1f6ff60793b9fc48c" Nov 24 13:30:21 crc kubenswrapper[4756]: I1124 13:30:21.325086 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9jmd6" podUID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerName="registry-server" containerID="cri-o://82d6e092105697712de47236902b03e37cf998e45f3f6f569798737248a1feb4" gracePeriod=2 Nov 24 13:30:21 crc kubenswrapper[4756]: I1124 13:30:21.476694 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:30:21 crc kubenswrapper[4756]: E1124 13:30:21.477485 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:30:21 crc kubenswrapper[4756]: I1124 13:30:21.857964 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:30:21 crc kubenswrapper[4756]: I1124 13:30:21.906235 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsn46\" (UniqueName: \"kubernetes.io/projected/6f079afa-a62c-4f33-8f8c-61853a77c73d-kube-api-access-nsn46\") pod \"6f079afa-a62c-4f33-8f8c-61853a77c73d\" (UID: \"6f079afa-a62c-4f33-8f8c-61853a77c73d\") " Nov 24 13:30:21 crc kubenswrapper[4756]: I1124 13:30:21.906399 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f079afa-a62c-4f33-8f8c-61853a77c73d-catalog-content\") pod \"6f079afa-a62c-4f33-8f8c-61853a77c73d\" (UID: \"6f079afa-a62c-4f33-8f8c-61853a77c73d\") " Nov 24 13:30:21 crc kubenswrapper[4756]: I1124 13:30:21.906472 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f079afa-a62c-4f33-8f8c-61853a77c73d-utilities\") pod \"6f079afa-a62c-4f33-8f8c-61853a77c73d\" (UID: \"6f079afa-a62c-4f33-8f8c-61853a77c73d\") " Nov 24 13:30:21 crc kubenswrapper[4756]: I1124 13:30:21.908063 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f079afa-a62c-4f33-8f8c-61853a77c73d-utilities" (OuterVolumeSpecName: "utilities") pod "6f079afa-a62c-4f33-8f8c-61853a77c73d" (UID: "6f079afa-a62c-4f33-8f8c-61853a77c73d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:30:21 crc kubenswrapper[4756]: I1124 13:30:21.930583 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f079afa-a62c-4f33-8f8c-61853a77c73d-kube-api-access-nsn46" (OuterVolumeSpecName: "kube-api-access-nsn46") pod "6f079afa-a62c-4f33-8f8c-61853a77c73d" (UID: "6f079afa-a62c-4f33-8f8c-61853a77c73d"). InnerVolumeSpecName "kube-api-access-nsn46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:30:21 crc kubenswrapper[4756]: I1124 13:30:21.997945 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f079afa-a62c-4f33-8f8c-61853a77c73d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f079afa-a62c-4f33-8f8c-61853a77c73d" (UID: "6f079afa-a62c-4f33-8f8c-61853a77c73d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.008285 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f079afa-a62c-4f33-8f8c-61853a77c73d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.008410 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f079afa-a62c-4f33-8f8c-61853a77c73d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.008543 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsn46\" (UniqueName: \"kubernetes.io/projected/6f079afa-a62c-4f33-8f8c-61853a77c73d-kube-api-access-nsn46\") on node \"crc\" DevicePath \"\"" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.338892 4756 generic.go:334] "Generic (PLEG): container finished" podID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerID="82d6e092105697712de47236902b03e37cf998e45f3f6f569798737248a1feb4" exitCode=0 Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.338951 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9jmd6" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.338962 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jmd6" event={"ID":"6f079afa-a62c-4f33-8f8c-61853a77c73d","Type":"ContainerDied","Data":"82d6e092105697712de47236902b03e37cf998e45f3f6f569798737248a1feb4"} Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.339060 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jmd6" event={"ID":"6f079afa-a62c-4f33-8f8c-61853a77c73d","Type":"ContainerDied","Data":"383c3532307e9f70a341ffd63aea8e31081a8256aab292d7b081e275ea7eb27d"} Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.339096 4756 scope.go:117] "RemoveContainer" containerID="82d6e092105697712de47236902b03e37cf998e45f3f6f569798737248a1feb4" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.370531 4756 scope.go:117] "RemoveContainer" containerID="6aaa7fd088e7ffb1bcc59229af88f874ee72702bfb93bc4bc750a53a14073e80" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.383362 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jmd6"] Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.400149 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9jmd6"] Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.423058 4756 scope.go:117] "RemoveContainer" containerID="4ffafb0a21f1a55c3463c90ae43fac3e484762f15754ee27f47532bdfae58186" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.469552 4756 scope.go:117] "RemoveContainer" containerID="82d6e092105697712de47236902b03e37cf998e45f3f6f569798737248a1feb4" Nov 24 13:30:22 crc kubenswrapper[4756]: E1124 13:30:22.469955 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"82d6e092105697712de47236902b03e37cf998e45f3f6f569798737248a1feb4\": container with ID starting with 82d6e092105697712de47236902b03e37cf998e45f3f6f569798737248a1feb4 not found: ID does not exist" containerID="82d6e092105697712de47236902b03e37cf998e45f3f6f569798737248a1feb4" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.469991 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82d6e092105697712de47236902b03e37cf998e45f3f6f569798737248a1feb4"} err="failed to get container status \"82d6e092105697712de47236902b03e37cf998e45f3f6f569798737248a1feb4\": rpc error: code = NotFound desc = could not find container \"82d6e092105697712de47236902b03e37cf998e45f3f6f569798737248a1feb4\": container with ID starting with 82d6e092105697712de47236902b03e37cf998e45f3f6f569798737248a1feb4 not found: ID does not exist" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.470016 4756 scope.go:117] "RemoveContainer" containerID="6aaa7fd088e7ffb1bcc59229af88f874ee72702bfb93bc4bc750a53a14073e80" Nov 24 13:30:22 crc kubenswrapper[4756]: E1124 13:30:22.470455 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aaa7fd088e7ffb1bcc59229af88f874ee72702bfb93bc4bc750a53a14073e80\": container with ID starting with 6aaa7fd088e7ffb1bcc59229af88f874ee72702bfb93bc4bc750a53a14073e80 not found: ID does not exist" containerID="6aaa7fd088e7ffb1bcc59229af88f874ee72702bfb93bc4bc750a53a14073e80" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.470476 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aaa7fd088e7ffb1bcc59229af88f874ee72702bfb93bc4bc750a53a14073e80"} err="failed to get container status \"6aaa7fd088e7ffb1bcc59229af88f874ee72702bfb93bc4bc750a53a14073e80\": rpc error: code = NotFound desc = could not find container \"6aaa7fd088e7ffb1bcc59229af88f874ee72702bfb93bc4bc750a53a14073e80\": container with ID 
starting with 6aaa7fd088e7ffb1bcc59229af88f874ee72702bfb93bc4bc750a53a14073e80 not found: ID does not exist" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.470491 4756 scope.go:117] "RemoveContainer" containerID="4ffafb0a21f1a55c3463c90ae43fac3e484762f15754ee27f47532bdfae58186" Nov 24 13:30:22 crc kubenswrapper[4756]: E1124 13:30:22.470944 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ffafb0a21f1a55c3463c90ae43fac3e484762f15754ee27f47532bdfae58186\": container with ID starting with 4ffafb0a21f1a55c3463c90ae43fac3e484762f15754ee27f47532bdfae58186 not found: ID does not exist" containerID="4ffafb0a21f1a55c3463c90ae43fac3e484762f15754ee27f47532bdfae58186" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.470984 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ffafb0a21f1a55c3463c90ae43fac3e484762f15754ee27f47532bdfae58186"} err="failed to get container status \"4ffafb0a21f1a55c3463c90ae43fac3e484762f15754ee27f47532bdfae58186\": rpc error: code = NotFound desc = could not find container \"4ffafb0a21f1a55c3463c90ae43fac3e484762f15754ee27f47532bdfae58186\": container with ID starting with 4ffafb0a21f1a55c3463c90ae43fac3e484762f15754ee27f47532bdfae58186 not found: ID does not exist" Nov 24 13:30:22 crc kubenswrapper[4756]: I1124 13:30:22.487697 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f079afa-a62c-4f33-8f8c-61853a77c73d" path="/var/lib/kubelet/pods/6f079afa-a62c-4f33-8f8c-61853a77c73d/volumes" Nov 24 13:30:35 crc kubenswrapper[4756]: I1124 13:30:35.476445 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:30:35 crc kubenswrapper[4756]: E1124 13:30:35.477201 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:30:48 crc kubenswrapper[4756]: I1124 13:30:48.483777 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:30:48 crc kubenswrapper[4756]: E1124 13:30:48.484730 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:31:01 crc kubenswrapper[4756]: I1124 13:31:01.476233 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:31:01 crc kubenswrapper[4756]: E1124 13:31:01.476991 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:31:16 crc kubenswrapper[4756]: I1124 13:31:16.476572 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:31:16 crc kubenswrapper[4756]: E1124 13:31:16.478110 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:31:29 crc kubenswrapper[4756]: I1124 13:31:29.476099 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:31:29 crc kubenswrapper[4756]: E1124 13:31:29.476959 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:31:40 crc kubenswrapper[4756]: I1124 13:31:40.476487 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:31:40 crc kubenswrapper[4756]: E1124 13:31:40.477545 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:31:55 crc kubenswrapper[4756]: I1124 13:31:55.475801 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:31:55 crc kubenswrapper[4756]: E1124 13:31:55.476502 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:32:10 crc kubenswrapper[4756]: I1124 13:32:10.476635 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:32:10 crc kubenswrapper[4756]: E1124 13:32:10.477941 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:32:21 crc kubenswrapper[4756]: I1124 13:32:21.476972 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:32:21 crc kubenswrapper[4756]: E1124 13:32:21.478100 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:32:36 crc kubenswrapper[4756]: I1124 13:32:36.475941 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:32:36 crc kubenswrapper[4756]: E1124 13:32:36.477915 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:32:49 crc kubenswrapper[4756]: I1124 13:32:49.475783 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:32:49 crc kubenswrapper[4756]: E1124 13:32:49.476595 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:33:01 crc kubenswrapper[4756]: I1124 13:33:01.476970 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:33:01 crc kubenswrapper[4756]: E1124 13:33:01.477855 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:33:15 crc kubenswrapper[4756]: I1124 13:33:15.475424 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:33:15 crc kubenswrapper[4756]: E1124 13:33:15.476178 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:33:30 crc kubenswrapper[4756]: I1124 13:33:30.476836 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:33:30 crc kubenswrapper[4756]: E1124 13:33:30.477942 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:33:41 crc kubenswrapper[4756]: I1124 13:33:41.476538 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:33:42 crc kubenswrapper[4756]: I1124 13:33:42.526845 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"13d533c1d4bf606f1d5184cbf6affbda6da4ed311cf226f0ef452c58fa7c58ca"} Nov 24 13:36:03 crc kubenswrapper[4756]: I1124 13:36:03.479241 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:36:03 crc kubenswrapper[4756]: I1124 13:36:03.479738 4756 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:36:33 crc kubenswrapper[4756]: I1124 13:36:33.478925 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:36:33 crc kubenswrapper[4756]: I1124 13:36:33.479594 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:37:03 crc kubenswrapper[4756]: I1124 13:37:03.479144 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:37:03 crc kubenswrapper[4756]: I1124 13:37:03.480293 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:37:03 crc kubenswrapper[4756]: I1124 13:37:03.480411 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" 
Nov 24 13:37:03 crc kubenswrapper[4756]: I1124 13:37:03.481608 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13d533c1d4bf606f1d5184cbf6affbda6da4ed311cf226f0ef452c58fa7c58ca"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:37:03 crc kubenswrapper[4756]: I1124 13:37:03.481742 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://13d533c1d4bf606f1d5184cbf6affbda6da4ed311cf226f0ef452c58fa7c58ca" gracePeriod=600 Nov 24 13:37:04 crc kubenswrapper[4756]: I1124 13:37:04.621372 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="13d533c1d4bf606f1d5184cbf6affbda6da4ed311cf226f0ef452c58fa7c58ca" exitCode=0 Nov 24 13:37:04 crc kubenswrapper[4756]: I1124 13:37:04.621445 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"13d533c1d4bf606f1d5184cbf6affbda6da4ed311cf226f0ef452c58fa7c58ca"} Nov 24 13:37:04 crc kubenswrapper[4756]: I1124 13:37:04.621841 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a"} Nov 24 13:37:04 crc kubenswrapper[4756]: I1124 13:37:04.621881 4756 scope.go:117] "RemoveContainer" containerID="0ef7bb0bf0be1dc1eb0b8f53b1398f56997762a38c4e0449a02551f924c69b8d" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.571388 
4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-74kq2"] Nov 24 13:39:00 crc kubenswrapper[4756]: E1124 13:39:00.572338 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9905eb6c-81a9-4461-8a07-51a77fb15cba" containerName="collect-profiles" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.572351 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9905eb6c-81a9-4461-8a07-51a77fb15cba" containerName="collect-profiles" Nov 24 13:39:00 crc kubenswrapper[4756]: E1124 13:39:00.572368 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerName="extract-utilities" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.572374 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerName="extract-utilities" Nov 24 13:39:00 crc kubenswrapper[4756]: E1124 13:39:00.572392 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerName="registry-server" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.572399 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerName="registry-server" Nov 24 13:39:00 crc kubenswrapper[4756]: E1124 13:39:00.572426 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerName="extract-content" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.572431 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerName="extract-content" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.572607 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f079afa-a62c-4f33-8f8c-61853a77c73d" containerName="registry-server" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.572625 4756 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9905eb6c-81a9-4461-8a07-51a77fb15cba" containerName="collect-profiles" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.575965 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.589334 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-74kq2"] Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.730065 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcdr7\" (UniqueName: \"kubernetes.io/projected/276b62c5-c856-406a-b03a-f7c985a5d64c-kube-api-access-qcdr7\") pod \"certified-operators-74kq2\" (UID: \"276b62c5-c856-406a-b03a-f7c985a5d64c\") " pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.730169 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/276b62c5-c856-406a-b03a-f7c985a5d64c-utilities\") pod \"certified-operators-74kq2\" (UID: \"276b62c5-c856-406a-b03a-f7c985a5d64c\") " pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.730451 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/276b62c5-c856-406a-b03a-f7c985a5d64c-catalog-content\") pod \"certified-operators-74kq2\" (UID: \"276b62c5-c856-406a-b03a-f7c985a5d64c\") " pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.832116 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/276b62c5-c856-406a-b03a-f7c985a5d64c-catalog-content\") pod 
\"certified-operators-74kq2\" (UID: \"276b62c5-c856-406a-b03a-f7c985a5d64c\") " pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.832231 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcdr7\" (UniqueName: \"kubernetes.io/projected/276b62c5-c856-406a-b03a-f7c985a5d64c-kube-api-access-qcdr7\") pod \"certified-operators-74kq2\" (UID: \"276b62c5-c856-406a-b03a-f7c985a5d64c\") " pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.832280 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/276b62c5-c856-406a-b03a-f7c985a5d64c-utilities\") pod \"certified-operators-74kq2\" (UID: \"276b62c5-c856-406a-b03a-f7c985a5d64c\") " pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.832764 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/276b62c5-c856-406a-b03a-f7c985a5d64c-catalog-content\") pod \"certified-operators-74kq2\" (UID: \"276b62c5-c856-406a-b03a-f7c985a5d64c\") " pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.832804 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/276b62c5-c856-406a-b03a-f7c985a5d64c-utilities\") pod \"certified-operators-74kq2\" (UID: \"276b62c5-c856-406a-b03a-f7c985a5d64c\") " pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.852453 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcdr7\" (UniqueName: \"kubernetes.io/projected/276b62c5-c856-406a-b03a-f7c985a5d64c-kube-api-access-qcdr7\") pod \"certified-operators-74kq2\" (UID: 
\"276b62c5-c856-406a-b03a-f7c985a5d64c\") " pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:00 crc kubenswrapper[4756]: I1124 13:39:00.895334 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:02 crc kubenswrapper[4756]: I1124 13:39:02.421909 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-74kq2"] Nov 24 13:39:02 crc kubenswrapper[4756]: I1124 13:39:02.788608 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74kq2" event={"ID":"276b62c5-c856-406a-b03a-f7c985a5d64c","Type":"ContainerStarted","Data":"8a83cca4c704469f53a42aa8289a4bfa4c6c08dac7cd6616ec8f99dbe7be1384"} Nov 24 13:39:03 crc kubenswrapper[4756]: I1124 13:39:03.801817 4756 generic.go:334] "Generic (PLEG): container finished" podID="276b62c5-c856-406a-b03a-f7c985a5d64c" containerID="a42f60b8c9a5f0a0ec44bd786fc217c590b016c01f8b9b7d291ca3fb5c8bc359" exitCode=0 Nov 24 13:39:03 crc kubenswrapper[4756]: I1124 13:39:03.801867 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74kq2" event={"ID":"276b62c5-c856-406a-b03a-f7c985a5d64c","Type":"ContainerDied","Data":"a42f60b8c9a5f0a0ec44bd786fc217c590b016c01f8b9b7d291ca3fb5c8bc359"} Nov 24 13:39:03 crc kubenswrapper[4756]: I1124 13:39:03.806489 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 13:39:05 crc kubenswrapper[4756]: I1124 13:39:05.824044 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74kq2" event={"ID":"276b62c5-c856-406a-b03a-f7c985a5d64c","Type":"ContainerStarted","Data":"60d62a1bd987880e255b9874a2f70469339998919c062a1f2eb581b6d82ec06f"} Nov 24 13:39:09 crc kubenswrapper[4756]: I1124 13:39:09.869848 4756 generic.go:334] "Generic (PLEG): container finished" 
podID="276b62c5-c856-406a-b03a-f7c985a5d64c" containerID="60d62a1bd987880e255b9874a2f70469339998919c062a1f2eb581b6d82ec06f" exitCode=0 Nov 24 13:39:09 crc kubenswrapper[4756]: I1124 13:39:09.869948 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74kq2" event={"ID":"276b62c5-c856-406a-b03a-f7c985a5d64c","Type":"ContainerDied","Data":"60d62a1bd987880e255b9874a2f70469339998919c062a1f2eb581b6d82ec06f"} Nov 24 13:39:10 crc kubenswrapper[4756]: I1124 13:39:10.886385 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74kq2" event={"ID":"276b62c5-c856-406a-b03a-f7c985a5d64c","Type":"ContainerStarted","Data":"9d5715999309d269576d2eeb79271f9156632144fb4318cdddd0212832299172"} Nov 24 13:39:10 crc kubenswrapper[4756]: I1124 13:39:10.895772 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:10 crc kubenswrapper[4756]: I1124 13:39:10.895841 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:10 crc kubenswrapper[4756]: I1124 13:39:10.919504 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-74kq2" podStartSLOduration=4.167227977 podStartE2EDuration="10.919479225s" podCreationTimestamp="2025-11-24 13:39:00 +0000 UTC" firstStartedPulling="2025-11-24 13:39:03.806106045 +0000 UTC m=+4276.163620207" lastFinishedPulling="2025-11-24 13:39:10.558357303 +0000 UTC m=+4282.915871455" observedRunningTime="2025-11-24 13:39:10.90765227 +0000 UTC m=+4283.265166442" watchObservedRunningTime="2025-11-24 13:39:10.919479225 +0000 UTC m=+4283.276993377" Nov 24 13:39:11 crc kubenswrapper[4756]: I1124 13:39:11.941860 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-74kq2" 
podUID="276b62c5-c856-406a-b03a-f7c985a5d64c" containerName="registry-server" probeResult="failure" output=< Nov 24 13:39:11 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 13:39:11 crc kubenswrapper[4756]: > Nov 24 13:39:16 crc kubenswrapper[4756]: I1124 13:39:16.351984 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sc96p"] Nov 24 13:39:16 crc kubenswrapper[4756]: I1124 13:39:16.355047 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:16 crc kubenswrapper[4756]: I1124 13:39:16.363084 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sc96p"] Nov 24 13:39:16 crc kubenswrapper[4756]: I1124 13:39:16.453634 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e96633-baf8-4e27-943d-9b5a12d905f4-utilities\") pod \"redhat-marketplace-sc96p\" (UID: \"65e96633-baf8-4e27-943d-9b5a12d905f4\") " pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:16 crc kubenswrapper[4756]: I1124 13:39:16.453707 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e96633-baf8-4e27-943d-9b5a12d905f4-catalog-content\") pod \"redhat-marketplace-sc96p\" (UID: \"65e96633-baf8-4e27-943d-9b5a12d905f4\") " pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:16 crc kubenswrapper[4756]: I1124 13:39:16.453793 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pwzw\" (UniqueName: \"kubernetes.io/projected/65e96633-baf8-4e27-943d-9b5a12d905f4-kube-api-access-4pwzw\") pod \"redhat-marketplace-sc96p\" (UID: \"65e96633-baf8-4e27-943d-9b5a12d905f4\") " 
pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:16 crc kubenswrapper[4756]: I1124 13:39:16.556210 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e96633-baf8-4e27-943d-9b5a12d905f4-catalog-content\") pod \"redhat-marketplace-sc96p\" (UID: \"65e96633-baf8-4e27-943d-9b5a12d905f4\") " pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:16 crc kubenswrapper[4756]: I1124 13:39:16.556338 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pwzw\" (UniqueName: \"kubernetes.io/projected/65e96633-baf8-4e27-943d-9b5a12d905f4-kube-api-access-4pwzw\") pod \"redhat-marketplace-sc96p\" (UID: \"65e96633-baf8-4e27-943d-9b5a12d905f4\") " pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:16 crc kubenswrapper[4756]: I1124 13:39:16.556475 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e96633-baf8-4e27-943d-9b5a12d905f4-utilities\") pod \"redhat-marketplace-sc96p\" (UID: \"65e96633-baf8-4e27-943d-9b5a12d905f4\") " pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:16 crc kubenswrapper[4756]: I1124 13:39:16.556825 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e96633-baf8-4e27-943d-9b5a12d905f4-catalog-content\") pod \"redhat-marketplace-sc96p\" (UID: \"65e96633-baf8-4e27-943d-9b5a12d905f4\") " pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:16 crc kubenswrapper[4756]: I1124 13:39:16.556935 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e96633-baf8-4e27-943d-9b5a12d905f4-utilities\") pod \"redhat-marketplace-sc96p\" (UID: \"65e96633-baf8-4e27-943d-9b5a12d905f4\") " pod="openshift-marketplace/redhat-marketplace-sc96p" 
Nov 24 13:39:16 crc kubenswrapper[4756]: I1124 13:39:16.594783 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pwzw\" (UniqueName: \"kubernetes.io/projected/65e96633-baf8-4e27-943d-9b5a12d905f4-kube-api-access-4pwzw\") pod \"redhat-marketplace-sc96p\" (UID: \"65e96633-baf8-4e27-943d-9b5a12d905f4\") " pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:16 crc kubenswrapper[4756]: I1124 13:39:16.680061 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:17 crc kubenswrapper[4756]: I1124 13:39:17.218355 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sc96p"] Nov 24 13:39:17 crc kubenswrapper[4756]: I1124 13:39:17.976258 4756 generic.go:334] "Generic (PLEG): container finished" podID="65e96633-baf8-4e27-943d-9b5a12d905f4" containerID="33bda138bf331620212ec8715be352953bdcf1cbfc1ecaa3b67e13238a7fb70e" exitCode=0 Nov 24 13:39:17 crc kubenswrapper[4756]: I1124 13:39:17.976301 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc96p" event={"ID":"65e96633-baf8-4e27-943d-9b5a12d905f4","Type":"ContainerDied","Data":"33bda138bf331620212ec8715be352953bdcf1cbfc1ecaa3b67e13238a7fb70e"} Nov 24 13:39:17 crc kubenswrapper[4756]: I1124 13:39:17.976653 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc96p" event={"ID":"65e96633-baf8-4e27-943d-9b5a12d905f4","Type":"ContainerStarted","Data":"83e7ea1b0674407f54573ea8ad47b45fe58a7ab20e6b2af9c67816fbde5c13b2"} Nov 24 13:39:20 crc kubenswrapper[4756]: I1124 13:39:19.999642 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc96p" event={"ID":"65e96633-baf8-4e27-943d-9b5a12d905f4","Type":"ContainerStarted","Data":"8256949aa28834cb271bcb298422207cbec055e631eed0e41e0f76a3fcf9a5f1"} Nov 24 
13:39:21 crc kubenswrapper[4756]: I1124 13:39:21.018375 4756 generic.go:334] "Generic (PLEG): container finished" podID="65e96633-baf8-4e27-943d-9b5a12d905f4" containerID="8256949aa28834cb271bcb298422207cbec055e631eed0e41e0f76a3fcf9a5f1" exitCode=0 Nov 24 13:39:21 crc kubenswrapper[4756]: I1124 13:39:21.018419 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc96p" event={"ID":"65e96633-baf8-4e27-943d-9b5a12d905f4","Type":"ContainerDied","Data":"8256949aa28834cb271bcb298422207cbec055e631eed0e41e0f76a3fcf9a5f1"} Nov 24 13:39:21 crc kubenswrapper[4756]: I1124 13:39:21.239684 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:21 crc kubenswrapper[4756]: I1124 13:39:21.299482 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:22 crc kubenswrapper[4756]: I1124 13:39:22.031994 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc96p" event={"ID":"65e96633-baf8-4e27-943d-9b5a12d905f4","Type":"ContainerStarted","Data":"c4612b678a1f46499b02c3a7036af192afb7295decd764a570ca9939d0221011"} Nov 24 13:39:22 crc kubenswrapper[4756]: I1124 13:39:22.054951 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sc96p" podStartSLOduration=2.5653326290000003 podStartE2EDuration="6.054926913s" podCreationTimestamp="2025-11-24 13:39:16 +0000 UTC" firstStartedPulling="2025-11-24 13:39:17.978782518 +0000 UTC m=+4290.336296660" lastFinishedPulling="2025-11-24 13:39:21.468376802 +0000 UTC m=+4293.825890944" observedRunningTime="2025-11-24 13:39:22.048607635 +0000 UTC m=+4294.406121787" watchObservedRunningTime="2025-11-24 13:39:22.054926913 +0000 UTC m=+4294.412441085" Nov 24 13:39:23 crc kubenswrapper[4756]: I1124 13:39:23.340774 4756 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-74kq2"] Nov 24 13:39:23 crc kubenswrapper[4756]: I1124 13:39:23.341583 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-74kq2" podUID="276b62c5-c856-406a-b03a-f7c985a5d64c" containerName="registry-server" containerID="cri-o://9d5715999309d269576d2eeb79271f9156632144fb4318cdddd0212832299172" gracePeriod=2 Nov 24 13:39:23 crc kubenswrapper[4756]: I1124 13:39:23.912006 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.053074 4756 generic.go:334] "Generic (PLEG): container finished" podID="276b62c5-c856-406a-b03a-f7c985a5d64c" containerID="9d5715999309d269576d2eeb79271f9156632144fb4318cdddd0212832299172" exitCode=0 Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.053114 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74kq2" event={"ID":"276b62c5-c856-406a-b03a-f7c985a5d64c","Type":"ContainerDied","Data":"9d5715999309d269576d2eeb79271f9156632144fb4318cdddd0212832299172"} Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.053130 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-74kq2" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.053209 4756 scope.go:117] "RemoveContainer" containerID="9d5715999309d269576d2eeb79271f9156632144fb4318cdddd0212832299172" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.053144 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74kq2" event={"ID":"276b62c5-c856-406a-b03a-f7c985a5d64c","Type":"ContainerDied","Data":"8a83cca4c704469f53a42aa8289a4bfa4c6c08dac7cd6616ec8f99dbe7be1384"} Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.082281 4756 scope.go:117] "RemoveContainer" containerID="60d62a1bd987880e255b9874a2f70469339998919c062a1f2eb581b6d82ec06f" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.138467 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcdr7\" (UniqueName: \"kubernetes.io/projected/276b62c5-c856-406a-b03a-f7c985a5d64c-kube-api-access-qcdr7\") pod \"276b62c5-c856-406a-b03a-f7c985a5d64c\" (UID: \"276b62c5-c856-406a-b03a-f7c985a5d64c\") " Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.138520 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/276b62c5-c856-406a-b03a-f7c985a5d64c-utilities\") pod \"276b62c5-c856-406a-b03a-f7c985a5d64c\" (UID: \"276b62c5-c856-406a-b03a-f7c985a5d64c\") " Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.138567 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/276b62c5-c856-406a-b03a-f7c985a5d64c-catalog-content\") pod \"276b62c5-c856-406a-b03a-f7c985a5d64c\" (UID: \"276b62c5-c856-406a-b03a-f7c985a5d64c\") " Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.139767 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/276b62c5-c856-406a-b03a-f7c985a5d64c-utilities" (OuterVolumeSpecName: "utilities") pod "276b62c5-c856-406a-b03a-f7c985a5d64c" (UID: "276b62c5-c856-406a-b03a-f7c985a5d64c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.145173 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/276b62c5-c856-406a-b03a-f7c985a5d64c-kube-api-access-qcdr7" (OuterVolumeSpecName: "kube-api-access-qcdr7") pod "276b62c5-c856-406a-b03a-f7c985a5d64c" (UID: "276b62c5-c856-406a-b03a-f7c985a5d64c"). InnerVolumeSpecName "kube-api-access-qcdr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.152641 4756 scope.go:117] "RemoveContainer" containerID="a42f60b8c9a5f0a0ec44bd786fc217c590b016c01f8b9b7d291ca3fb5c8bc359" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.190796 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/276b62c5-c856-406a-b03a-f7c985a5d64c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "276b62c5-c856-406a-b03a-f7c985a5d64c" (UID: "276b62c5-c856-406a-b03a-f7c985a5d64c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.208178 4756 scope.go:117] "RemoveContainer" containerID="9d5715999309d269576d2eeb79271f9156632144fb4318cdddd0212832299172" Nov 24 13:39:24 crc kubenswrapper[4756]: E1124 13:39:24.208884 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d5715999309d269576d2eeb79271f9156632144fb4318cdddd0212832299172\": container with ID starting with 9d5715999309d269576d2eeb79271f9156632144fb4318cdddd0212832299172 not found: ID does not exist" containerID="9d5715999309d269576d2eeb79271f9156632144fb4318cdddd0212832299172" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.208913 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5715999309d269576d2eeb79271f9156632144fb4318cdddd0212832299172"} err="failed to get container status \"9d5715999309d269576d2eeb79271f9156632144fb4318cdddd0212832299172\": rpc error: code = NotFound desc = could not find container \"9d5715999309d269576d2eeb79271f9156632144fb4318cdddd0212832299172\": container with ID starting with 9d5715999309d269576d2eeb79271f9156632144fb4318cdddd0212832299172 not found: ID does not exist" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.208932 4756 scope.go:117] "RemoveContainer" containerID="60d62a1bd987880e255b9874a2f70469339998919c062a1f2eb581b6d82ec06f" Nov 24 13:39:24 crc kubenswrapper[4756]: E1124 13:39:24.209256 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60d62a1bd987880e255b9874a2f70469339998919c062a1f2eb581b6d82ec06f\": container with ID starting with 60d62a1bd987880e255b9874a2f70469339998919c062a1f2eb581b6d82ec06f not found: ID does not exist" containerID="60d62a1bd987880e255b9874a2f70469339998919c062a1f2eb581b6d82ec06f" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.209280 
4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60d62a1bd987880e255b9874a2f70469339998919c062a1f2eb581b6d82ec06f"} err="failed to get container status \"60d62a1bd987880e255b9874a2f70469339998919c062a1f2eb581b6d82ec06f\": rpc error: code = NotFound desc = could not find container \"60d62a1bd987880e255b9874a2f70469339998919c062a1f2eb581b6d82ec06f\": container with ID starting with 60d62a1bd987880e255b9874a2f70469339998919c062a1f2eb581b6d82ec06f not found: ID does not exist" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.209294 4756 scope.go:117] "RemoveContainer" containerID="a42f60b8c9a5f0a0ec44bd786fc217c590b016c01f8b9b7d291ca3fb5c8bc359" Nov 24 13:39:24 crc kubenswrapper[4756]: E1124 13:39:24.209502 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a42f60b8c9a5f0a0ec44bd786fc217c590b016c01f8b9b7d291ca3fb5c8bc359\": container with ID starting with a42f60b8c9a5f0a0ec44bd786fc217c590b016c01f8b9b7d291ca3fb5c8bc359 not found: ID does not exist" containerID="a42f60b8c9a5f0a0ec44bd786fc217c590b016c01f8b9b7d291ca3fb5c8bc359" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.209523 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a42f60b8c9a5f0a0ec44bd786fc217c590b016c01f8b9b7d291ca3fb5c8bc359"} err="failed to get container status \"a42f60b8c9a5f0a0ec44bd786fc217c590b016c01f8b9b7d291ca3fb5c8bc359\": rpc error: code = NotFound desc = could not find container \"a42f60b8c9a5f0a0ec44bd786fc217c590b016c01f8b9b7d291ca3fb5c8bc359\": container with ID starting with a42f60b8c9a5f0a0ec44bd786fc217c590b016c01f8b9b7d291ca3fb5c8bc359 not found: ID does not exist" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.241821 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcdr7\" (UniqueName: 
\"kubernetes.io/projected/276b62c5-c856-406a-b03a-f7c985a5d64c-kube-api-access-qcdr7\") on node \"crc\" DevicePath \"\"" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.241895 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/276b62c5-c856-406a-b03a-f7c985a5d64c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.241914 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/276b62c5-c856-406a-b03a-f7c985a5d64c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.426033 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-74kq2"] Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.438559 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-74kq2"] Nov 24 13:39:24 crc kubenswrapper[4756]: I1124 13:39:24.491087 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="276b62c5-c856-406a-b03a-f7c985a5d64c" path="/var/lib/kubelet/pods/276b62c5-c856-406a-b03a-f7c985a5d64c/volumes" Nov 24 13:39:24 crc kubenswrapper[4756]: E1124 13:39:24.578204 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod276b62c5_c856_406a_b03a_f7c985a5d64c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod276b62c5_c856_406a_b03a_f7c985a5d64c.slice/crio-8a83cca4c704469f53a42aa8289a4bfa4c6c08dac7cd6616ec8f99dbe7be1384\": RecentStats: unable to find data in memory cache]" Nov 24 13:39:26 crc kubenswrapper[4756]: I1124 13:39:26.681358 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:26 crc kubenswrapper[4756]: I1124 13:39:26.681914 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:26 crc kubenswrapper[4756]: I1124 13:39:26.744610 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:27 crc kubenswrapper[4756]: I1124 13:39:27.140600 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:27 crc kubenswrapper[4756]: I1124 13:39:27.925897 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sc96p"] Nov 24 13:39:29 crc kubenswrapper[4756]: I1124 13:39:29.101822 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sc96p" podUID="65e96633-baf8-4e27-943d-9b5a12d905f4" containerName="registry-server" containerID="cri-o://c4612b678a1f46499b02c3a7036af192afb7295decd764a570ca9939d0221011" gracePeriod=2 Nov 24 13:39:29 crc kubenswrapper[4756]: I1124 13:39:29.674140 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:29 crc kubenswrapper[4756]: I1124 13:39:29.749937 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pwzw\" (UniqueName: \"kubernetes.io/projected/65e96633-baf8-4e27-943d-9b5a12d905f4-kube-api-access-4pwzw\") pod \"65e96633-baf8-4e27-943d-9b5a12d905f4\" (UID: \"65e96633-baf8-4e27-943d-9b5a12d905f4\") " Nov 24 13:39:29 crc kubenswrapper[4756]: I1124 13:39:29.750171 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e96633-baf8-4e27-943d-9b5a12d905f4-catalog-content\") pod \"65e96633-baf8-4e27-943d-9b5a12d905f4\" (UID: \"65e96633-baf8-4e27-943d-9b5a12d905f4\") " Nov 24 13:39:29 crc kubenswrapper[4756]: I1124 13:39:29.750280 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e96633-baf8-4e27-943d-9b5a12d905f4-utilities\") pod \"65e96633-baf8-4e27-943d-9b5a12d905f4\" (UID: \"65e96633-baf8-4e27-943d-9b5a12d905f4\") " Nov 24 13:39:29 crc kubenswrapper[4756]: I1124 13:39:29.751074 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65e96633-baf8-4e27-943d-9b5a12d905f4-utilities" (OuterVolumeSpecName: "utilities") pod "65e96633-baf8-4e27-943d-9b5a12d905f4" (UID: "65e96633-baf8-4e27-943d-9b5a12d905f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:39:29 crc kubenswrapper[4756]: I1124 13:39:29.756857 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e96633-baf8-4e27-943d-9b5a12d905f4-kube-api-access-4pwzw" (OuterVolumeSpecName: "kube-api-access-4pwzw") pod "65e96633-baf8-4e27-943d-9b5a12d905f4" (UID: "65e96633-baf8-4e27-943d-9b5a12d905f4"). InnerVolumeSpecName "kube-api-access-4pwzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:39:29 crc kubenswrapper[4756]: I1124 13:39:29.757596 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e96633-baf8-4e27-943d-9b5a12d905f4-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:39:29 crc kubenswrapper[4756]: I1124 13:39:29.776717 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65e96633-baf8-4e27-943d-9b5a12d905f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65e96633-baf8-4e27-943d-9b5a12d905f4" (UID: "65e96633-baf8-4e27-943d-9b5a12d905f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:39:29 crc kubenswrapper[4756]: I1124 13:39:29.860439 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e96633-baf8-4e27-943d-9b5a12d905f4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:39:29 crc kubenswrapper[4756]: I1124 13:39:29.860491 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pwzw\" (UniqueName: \"kubernetes.io/projected/65e96633-baf8-4e27-943d-9b5a12d905f4-kube-api-access-4pwzw\") on node \"crc\" DevicePath \"\"" Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.114042 4756 generic.go:334] "Generic (PLEG): container finished" podID="65e96633-baf8-4e27-943d-9b5a12d905f4" containerID="c4612b678a1f46499b02c3a7036af192afb7295decd764a570ca9939d0221011" exitCode=0 Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.114092 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc96p" event={"ID":"65e96633-baf8-4e27-943d-9b5a12d905f4","Type":"ContainerDied","Data":"c4612b678a1f46499b02c3a7036af192afb7295decd764a570ca9939d0221011"} Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.114131 4756 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sc96p" Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.114525 4756 scope.go:117] "RemoveContainer" containerID="c4612b678a1f46499b02c3a7036af192afb7295decd764a570ca9939d0221011" Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.114509 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc96p" event={"ID":"65e96633-baf8-4e27-943d-9b5a12d905f4","Type":"ContainerDied","Data":"83e7ea1b0674407f54573ea8ad47b45fe58a7ab20e6b2af9c67816fbde5c13b2"} Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.148400 4756 scope.go:117] "RemoveContainer" containerID="8256949aa28834cb271bcb298422207cbec055e631eed0e41e0f76a3fcf9a5f1" Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.157632 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sc96p"] Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.168293 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sc96p"] Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.175515 4756 scope.go:117] "RemoveContainer" containerID="33bda138bf331620212ec8715be352953bdcf1cbfc1ecaa3b67e13238a7fb70e" Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.226461 4756 scope.go:117] "RemoveContainer" containerID="c4612b678a1f46499b02c3a7036af192afb7295decd764a570ca9939d0221011" Nov 24 13:39:30 crc kubenswrapper[4756]: E1124 13:39:30.227075 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4612b678a1f46499b02c3a7036af192afb7295decd764a570ca9939d0221011\": container with ID starting with c4612b678a1f46499b02c3a7036af192afb7295decd764a570ca9939d0221011 not found: ID does not exist" containerID="c4612b678a1f46499b02c3a7036af192afb7295decd764a570ca9939d0221011" Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.227206 4756 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4612b678a1f46499b02c3a7036af192afb7295decd764a570ca9939d0221011"} err="failed to get container status \"c4612b678a1f46499b02c3a7036af192afb7295decd764a570ca9939d0221011\": rpc error: code = NotFound desc = could not find container \"c4612b678a1f46499b02c3a7036af192afb7295decd764a570ca9939d0221011\": container with ID starting with c4612b678a1f46499b02c3a7036af192afb7295decd764a570ca9939d0221011 not found: ID does not exist" Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.227266 4756 scope.go:117] "RemoveContainer" containerID="8256949aa28834cb271bcb298422207cbec055e631eed0e41e0f76a3fcf9a5f1" Nov 24 13:39:30 crc kubenswrapper[4756]: E1124 13:39:30.229494 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8256949aa28834cb271bcb298422207cbec055e631eed0e41e0f76a3fcf9a5f1\": container with ID starting with 8256949aa28834cb271bcb298422207cbec055e631eed0e41e0f76a3fcf9a5f1 not found: ID does not exist" containerID="8256949aa28834cb271bcb298422207cbec055e631eed0e41e0f76a3fcf9a5f1" Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.229532 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8256949aa28834cb271bcb298422207cbec055e631eed0e41e0f76a3fcf9a5f1"} err="failed to get container status \"8256949aa28834cb271bcb298422207cbec055e631eed0e41e0f76a3fcf9a5f1\": rpc error: code = NotFound desc = could not find container \"8256949aa28834cb271bcb298422207cbec055e631eed0e41e0f76a3fcf9a5f1\": container with ID starting with 8256949aa28834cb271bcb298422207cbec055e631eed0e41e0f76a3fcf9a5f1 not found: ID does not exist" Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.229556 4756 scope.go:117] "RemoveContainer" containerID="33bda138bf331620212ec8715be352953bdcf1cbfc1ecaa3b67e13238a7fb70e" Nov 24 13:39:30 crc kubenswrapper[4756]: E1124 
13:39:30.229958 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33bda138bf331620212ec8715be352953bdcf1cbfc1ecaa3b67e13238a7fb70e\": container with ID starting with 33bda138bf331620212ec8715be352953bdcf1cbfc1ecaa3b67e13238a7fb70e not found: ID does not exist" containerID="33bda138bf331620212ec8715be352953bdcf1cbfc1ecaa3b67e13238a7fb70e" Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.229990 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33bda138bf331620212ec8715be352953bdcf1cbfc1ecaa3b67e13238a7fb70e"} err="failed to get container status \"33bda138bf331620212ec8715be352953bdcf1cbfc1ecaa3b67e13238a7fb70e\": rpc error: code = NotFound desc = could not find container \"33bda138bf331620212ec8715be352953bdcf1cbfc1ecaa3b67e13238a7fb70e\": container with ID starting with 33bda138bf331620212ec8715be352953bdcf1cbfc1ecaa3b67e13238a7fb70e not found: ID does not exist" Nov 24 13:39:30 crc kubenswrapper[4756]: I1124 13:39:30.487849 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65e96633-baf8-4e27-943d-9b5a12d905f4" path="/var/lib/kubelet/pods/65e96633-baf8-4e27-943d-9b5a12d905f4/volumes" Nov 24 13:39:33 crc kubenswrapper[4756]: I1124 13:39:33.479585 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:39:33 crc kubenswrapper[4756]: I1124 13:39:33.480173 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 24 13:40:03 crc kubenswrapper[4756]: I1124 13:40:03.479142 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:40:03 crc kubenswrapper[4756]: I1124 13:40:03.479923 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:40:33 crc kubenswrapper[4756]: I1124 13:40:33.478926 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:40:33 crc kubenswrapper[4756]: I1124 13:40:33.479334 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:40:33 crc kubenswrapper[4756]: I1124 13:40:33.479373 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 13:40:33 crc kubenswrapper[4756]: I1124 13:40:33.479856 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a"} 
pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:40:33 crc kubenswrapper[4756]: I1124 13:40:33.479909 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" gracePeriod=600 Nov 24 13:40:33 crc kubenswrapper[4756]: E1124 13:40:33.609915 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:40:33 crc kubenswrapper[4756]: I1124 13:40:33.767331 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" exitCode=0 Nov 24 13:40:33 crc kubenswrapper[4756]: I1124 13:40:33.767384 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a"} Nov 24 13:40:33 crc kubenswrapper[4756]: I1124 13:40:33.767424 4756 scope.go:117] "RemoveContainer" containerID="13d533c1d4bf606f1d5184cbf6affbda6da4ed311cf226f0ef452c58fa7c58ca" Nov 24 13:40:33 crc kubenswrapper[4756]: I1124 13:40:33.768365 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 
24 13:40:33 crc kubenswrapper[4756]: E1124 13:40:33.769074 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:40:35 crc kubenswrapper[4756]: I1124 13:40:35.932941 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zbs2b"] Nov 24 13:40:35 crc kubenswrapper[4756]: E1124 13:40:35.933894 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e96633-baf8-4e27-943d-9b5a12d905f4" containerName="registry-server" Nov 24 13:40:35 crc kubenswrapper[4756]: I1124 13:40:35.933919 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e96633-baf8-4e27-943d-9b5a12d905f4" containerName="registry-server" Nov 24 13:40:35 crc kubenswrapper[4756]: E1124 13:40:35.933975 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e96633-baf8-4e27-943d-9b5a12d905f4" containerName="extract-content" Nov 24 13:40:35 crc kubenswrapper[4756]: I1124 13:40:35.933987 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e96633-baf8-4e27-943d-9b5a12d905f4" containerName="extract-content" Nov 24 13:40:35 crc kubenswrapper[4756]: E1124 13:40:35.934014 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="276b62c5-c856-406a-b03a-f7c985a5d64c" containerName="extract-content" Nov 24 13:40:35 crc kubenswrapper[4756]: I1124 13:40:35.934024 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="276b62c5-c856-406a-b03a-f7c985a5d64c" containerName="extract-content" Nov 24 13:40:35 crc kubenswrapper[4756]: E1124 13:40:35.934074 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="276b62c5-c856-406a-b03a-f7c985a5d64c" containerName="registry-server" Nov 24 13:40:35 crc kubenswrapper[4756]: I1124 13:40:35.934084 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="276b62c5-c856-406a-b03a-f7c985a5d64c" containerName="registry-server" Nov 24 13:40:35 crc kubenswrapper[4756]: E1124 13:40:35.934098 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e96633-baf8-4e27-943d-9b5a12d905f4" containerName="extract-utilities" Nov 24 13:40:35 crc kubenswrapper[4756]: I1124 13:40:35.934109 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e96633-baf8-4e27-943d-9b5a12d905f4" containerName="extract-utilities" Nov 24 13:40:35 crc kubenswrapper[4756]: E1124 13:40:35.934130 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="276b62c5-c856-406a-b03a-f7c985a5d64c" containerName="extract-utilities" Nov 24 13:40:35 crc kubenswrapper[4756]: I1124 13:40:35.934141 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="276b62c5-c856-406a-b03a-f7c985a5d64c" containerName="extract-utilities" Nov 24 13:40:35 crc kubenswrapper[4756]: I1124 13:40:35.934693 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="65e96633-baf8-4e27-943d-9b5a12d905f4" containerName="registry-server" Nov 24 13:40:35 crc kubenswrapper[4756]: I1124 13:40:35.934748 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="276b62c5-c856-406a-b03a-f7c985a5d64c" containerName="registry-server" Nov 24 13:40:35 crc kubenswrapper[4756]: I1124 13:40:35.937143 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:35 crc kubenswrapper[4756]: I1124 13:40:35.944194 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zbs2b"] Nov 24 13:40:36 crc kubenswrapper[4756]: I1124 13:40:36.047177 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsg6f\" (UniqueName: \"kubernetes.io/projected/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-kube-api-access-tsg6f\") pod \"redhat-operators-zbs2b\" (UID: \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\") " pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:36 crc kubenswrapper[4756]: I1124 13:40:36.047570 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-catalog-content\") pod \"redhat-operators-zbs2b\" (UID: \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\") " pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:36 crc kubenswrapper[4756]: I1124 13:40:36.047645 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-utilities\") pod \"redhat-operators-zbs2b\" (UID: \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\") " pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:36 crc kubenswrapper[4756]: I1124 13:40:36.149350 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-utilities\") pod \"redhat-operators-zbs2b\" (UID: \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\") " pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:36 crc kubenswrapper[4756]: I1124 13:40:36.149429 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tsg6f\" (UniqueName: \"kubernetes.io/projected/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-kube-api-access-tsg6f\") pod \"redhat-operators-zbs2b\" (UID: \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\") " pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:36 crc kubenswrapper[4756]: I1124 13:40:36.149522 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-catalog-content\") pod \"redhat-operators-zbs2b\" (UID: \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\") " pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:36 crc kubenswrapper[4756]: I1124 13:40:36.149981 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-utilities\") pod \"redhat-operators-zbs2b\" (UID: \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\") " pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:36 crc kubenswrapper[4756]: I1124 13:40:36.150021 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-catalog-content\") pod \"redhat-operators-zbs2b\" (UID: \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\") " pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:36 crc kubenswrapper[4756]: I1124 13:40:36.183032 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsg6f\" (UniqueName: \"kubernetes.io/projected/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-kube-api-access-tsg6f\") pod \"redhat-operators-zbs2b\" (UID: \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\") " pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:36 crc kubenswrapper[4756]: I1124 13:40:36.269733 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:36 crc kubenswrapper[4756]: I1124 13:40:36.785135 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zbs2b"] Nov 24 13:40:36 crc kubenswrapper[4756]: I1124 13:40:36.802380 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbs2b" event={"ID":"bb81bf1f-1772-4e1f-b5bc-6b53c271c755","Type":"ContainerStarted","Data":"ec353f192360f8732144b162233cc0415b8f44e2f23f8bc0b8f03d5a88bc537d"} Nov 24 13:40:37 crc kubenswrapper[4756]: I1124 13:40:37.818925 4756 generic.go:334] "Generic (PLEG): container finished" podID="bb81bf1f-1772-4e1f-b5bc-6b53c271c755" containerID="b3d6c741b81b8be771d5232a05a57d558aaa3f499eb8a807bd724d917c391bed" exitCode=0 Nov 24 13:40:37 crc kubenswrapper[4756]: I1124 13:40:37.818988 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbs2b" event={"ID":"bb81bf1f-1772-4e1f-b5bc-6b53c271c755","Type":"ContainerDied","Data":"b3d6c741b81b8be771d5232a05a57d558aaa3f499eb8a807bd724d917c391bed"} Nov 24 13:40:38 crc kubenswrapper[4756]: I1124 13:40:38.831494 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbs2b" event={"ID":"bb81bf1f-1772-4e1f-b5bc-6b53c271c755","Type":"ContainerStarted","Data":"d0fa3150dbaacb5eb0286a09d0fa062f0d00b83942308c2b6c7023ddbd19526d"} Nov 24 13:40:41 crc kubenswrapper[4756]: I1124 13:40:41.866333 4756 generic.go:334] "Generic (PLEG): container finished" podID="bb81bf1f-1772-4e1f-b5bc-6b53c271c755" containerID="d0fa3150dbaacb5eb0286a09d0fa062f0d00b83942308c2b6c7023ddbd19526d" exitCode=0 Nov 24 13:40:41 crc kubenswrapper[4756]: I1124 13:40:41.867143 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbs2b" 
event={"ID":"bb81bf1f-1772-4e1f-b5bc-6b53c271c755","Type":"ContainerDied","Data":"d0fa3150dbaacb5eb0286a09d0fa062f0d00b83942308c2b6c7023ddbd19526d"} Nov 24 13:40:42 crc kubenswrapper[4756]: I1124 13:40:42.877622 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbs2b" event={"ID":"bb81bf1f-1772-4e1f-b5bc-6b53c271c755","Type":"ContainerStarted","Data":"45f26efc25209015500c56b632d9ad0b693b95b080beb35fe0cc9b83e83453b0"} Nov 24 13:40:42 crc kubenswrapper[4756]: I1124 13:40:42.894301 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zbs2b" podStartSLOduration=3.477692106 podStartE2EDuration="7.894281485s" podCreationTimestamp="2025-11-24 13:40:35 +0000 UTC" firstStartedPulling="2025-11-24 13:40:37.821072549 +0000 UTC m=+4370.178586691" lastFinishedPulling="2025-11-24 13:40:42.237661928 +0000 UTC m=+4374.595176070" observedRunningTime="2025-11-24 13:40:42.8918179 +0000 UTC m=+4375.249332062" watchObservedRunningTime="2025-11-24 13:40:42.894281485 +0000 UTC m=+4375.251795627" Nov 24 13:40:45 crc kubenswrapper[4756]: I1124 13:40:45.476379 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:40:45 crc kubenswrapper[4756]: E1124 13:40:45.477246 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:40:46 crc kubenswrapper[4756]: I1124 13:40:46.270914 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:46 crc kubenswrapper[4756]: 
I1124 13:40:46.271417 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:47 crc kubenswrapper[4756]: I1124 13:40:47.333380 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zbs2b" podUID="bb81bf1f-1772-4e1f-b5bc-6b53c271c755" containerName="registry-server" probeResult="failure" output=< Nov 24 13:40:47 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 13:40:47 crc kubenswrapper[4756]: > Nov 24 13:40:56 crc kubenswrapper[4756]: I1124 13:40:56.336713 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:56 crc kubenswrapper[4756]: I1124 13:40:56.415603 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:56 crc kubenswrapper[4756]: I1124 13:40:56.582115 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zbs2b"] Nov 24 13:40:57 crc kubenswrapper[4756]: I1124 13:40:57.476238 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:40:57 crc kubenswrapper[4756]: E1124 13:40:57.476868 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:40:58 crc kubenswrapper[4756]: I1124 13:40:58.034482 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zbs2b" 
podUID="bb81bf1f-1772-4e1f-b5bc-6b53c271c755" containerName="registry-server" containerID="cri-o://45f26efc25209015500c56b632d9ad0b693b95b080beb35fe0cc9b83e83453b0" gracePeriod=2 Nov 24 13:40:58 crc kubenswrapper[4756]: I1124 13:40:58.503419 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:58 crc kubenswrapper[4756]: I1124 13:40:58.621887 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsg6f\" (UniqueName: \"kubernetes.io/projected/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-kube-api-access-tsg6f\") pod \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\" (UID: \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\") " Nov 24 13:40:58 crc kubenswrapper[4756]: I1124 13:40:58.622298 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-catalog-content\") pod \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\" (UID: \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\") " Nov 24 13:40:58 crc kubenswrapper[4756]: I1124 13:40:58.622502 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-utilities\") pod \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\" (UID: \"bb81bf1f-1772-4e1f-b5bc-6b53c271c755\") " Nov 24 13:40:58 crc kubenswrapper[4756]: I1124 13:40:58.625328 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-utilities" (OuterVolumeSpecName: "utilities") pod "bb81bf1f-1772-4e1f-b5bc-6b53c271c755" (UID: "bb81bf1f-1772-4e1f-b5bc-6b53c271c755"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:40:58 crc kubenswrapper[4756]: I1124 13:40:58.632642 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-kube-api-access-tsg6f" (OuterVolumeSpecName: "kube-api-access-tsg6f") pod "bb81bf1f-1772-4e1f-b5bc-6b53c271c755" (UID: "bb81bf1f-1772-4e1f-b5bc-6b53c271c755"). InnerVolumeSpecName "kube-api-access-tsg6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:40:58 crc kubenswrapper[4756]: I1124 13:40:58.715497 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb81bf1f-1772-4e1f-b5bc-6b53c271c755" (UID: "bb81bf1f-1772-4e1f-b5bc-6b53c271c755"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:40:58 crc kubenswrapper[4756]: I1124 13:40:58.724554 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsg6f\" (UniqueName: \"kubernetes.io/projected/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-kube-api-access-tsg6f\") on node \"crc\" DevicePath \"\"" Nov 24 13:40:58 crc kubenswrapper[4756]: I1124 13:40:58.724587 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:40:58 crc kubenswrapper[4756]: I1124 13:40:58.724600 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb81bf1f-1772-4e1f-b5bc-6b53c271c755-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.046765 4756 generic.go:334] "Generic (PLEG): container finished" podID="bb81bf1f-1772-4e1f-b5bc-6b53c271c755" 
containerID="45f26efc25209015500c56b632d9ad0b693b95b080beb35fe0cc9b83e83453b0" exitCode=0 Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.046806 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbs2b" event={"ID":"bb81bf1f-1772-4e1f-b5bc-6b53c271c755","Type":"ContainerDied","Data":"45f26efc25209015500c56b632d9ad0b693b95b080beb35fe0cc9b83e83453b0"} Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.046831 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbs2b" event={"ID":"bb81bf1f-1772-4e1f-b5bc-6b53c271c755","Type":"ContainerDied","Data":"ec353f192360f8732144b162233cc0415b8f44e2f23f8bc0b8f03d5a88bc537d"} Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.046849 4756 scope.go:117] "RemoveContainer" containerID="45f26efc25209015500c56b632d9ad0b693b95b080beb35fe0cc9b83e83453b0" Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.046882 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zbs2b" Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.091233 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zbs2b"] Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.092587 4756 scope.go:117] "RemoveContainer" containerID="d0fa3150dbaacb5eb0286a09d0fa062f0d00b83942308c2b6c7023ddbd19526d" Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.100535 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zbs2b"] Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.124364 4756 scope.go:117] "RemoveContainer" containerID="b3d6c741b81b8be771d5232a05a57d558aaa3f499eb8a807bd724d917c391bed" Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.169053 4756 scope.go:117] "RemoveContainer" containerID="45f26efc25209015500c56b632d9ad0b693b95b080beb35fe0cc9b83e83453b0" Nov 24 13:40:59 crc kubenswrapper[4756]: E1124 13:40:59.169714 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45f26efc25209015500c56b632d9ad0b693b95b080beb35fe0cc9b83e83453b0\": container with ID starting with 45f26efc25209015500c56b632d9ad0b693b95b080beb35fe0cc9b83e83453b0 not found: ID does not exist" containerID="45f26efc25209015500c56b632d9ad0b693b95b080beb35fe0cc9b83e83453b0" Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.169752 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f26efc25209015500c56b632d9ad0b693b95b080beb35fe0cc9b83e83453b0"} err="failed to get container status \"45f26efc25209015500c56b632d9ad0b693b95b080beb35fe0cc9b83e83453b0\": rpc error: code = NotFound desc = could not find container \"45f26efc25209015500c56b632d9ad0b693b95b080beb35fe0cc9b83e83453b0\": container with ID starting with 45f26efc25209015500c56b632d9ad0b693b95b080beb35fe0cc9b83e83453b0 not found: ID does 
not exist" Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.169771 4756 scope.go:117] "RemoveContainer" containerID="d0fa3150dbaacb5eb0286a09d0fa062f0d00b83942308c2b6c7023ddbd19526d" Nov 24 13:40:59 crc kubenswrapper[4756]: E1124 13:40:59.170173 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0fa3150dbaacb5eb0286a09d0fa062f0d00b83942308c2b6c7023ddbd19526d\": container with ID starting with d0fa3150dbaacb5eb0286a09d0fa062f0d00b83942308c2b6c7023ddbd19526d not found: ID does not exist" containerID="d0fa3150dbaacb5eb0286a09d0fa062f0d00b83942308c2b6c7023ddbd19526d" Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.170201 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0fa3150dbaacb5eb0286a09d0fa062f0d00b83942308c2b6c7023ddbd19526d"} err="failed to get container status \"d0fa3150dbaacb5eb0286a09d0fa062f0d00b83942308c2b6c7023ddbd19526d\": rpc error: code = NotFound desc = could not find container \"d0fa3150dbaacb5eb0286a09d0fa062f0d00b83942308c2b6c7023ddbd19526d\": container with ID starting with d0fa3150dbaacb5eb0286a09d0fa062f0d00b83942308c2b6c7023ddbd19526d not found: ID does not exist" Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.170245 4756 scope.go:117] "RemoveContainer" containerID="b3d6c741b81b8be771d5232a05a57d558aaa3f499eb8a807bd724d917c391bed" Nov 24 13:40:59 crc kubenswrapper[4756]: E1124 13:40:59.170513 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3d6c741b81b8be771d5232a05a57d558aaa3f499eb8a807bd724d917c391bed\": container with ID starting with b3d6c741b81b8be771d5232a05a57d558aaa3f499eb8a807bd724d917c391bed not found: ID does not exist" containerID="b3d6c741b81b8be771d5232a05a57d558aaa3f499eb8a807bd724d917c391bed" Nov 24 13:40:59 crc kubenswrapper[4756]: I1124 13:40:59.170542 4756 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d6c741b81b8be771d5232a05a57d558aaa3f499eb8a807bd724d917c391bed"} err="failed to get container status \"b3d6c741b81b8be771d5232a05a57d558aaa3f499eb8a807bd724d917c391bed\": rpc error: code = NotFound desc = could not find container \"b3d6c741b81b8be771d5232a05a57d558aaa3f499eb8a807bd724d917c391bed\": container with ID starting with b3d6c741b81b8be771d5232a05a57d558aaa3f499eb8a807bd724d917c391bed not found: ID does not exist" Nov 24 13:41:00 crc kubenswrapper[4756]: I1124 13:41:00.497794 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb81bf1f-1772-4e1f-b5bc-6b53c271c755" path="/var/lib/kubelet/pods/bb81bf1f-1772-4e1f-b5bc-6b53c271c755/volumes" Nov 24 13:41:09 crc kubenswrapper[4756]: I1124 13:41:09.475774 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:41:09 crc kubenswrapper[4756]: E1124 13:41:09.477496 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:41:23 crc kubenswrapper[4756]: I1124 13:41:23.476320 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:41:23 crc kubenswrapper[4756]: E1124 13:41:23.477248 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:41:36 crc kubenswrapper[4756]: I1124 13:41:36.475508 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:41:36 crc kubenswrapper[4756]: E1124 13:41:36.476328 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:41:50 crc kubenswrapper[4756]: I1124 13:41:50.476004 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:41:50 crc kubenswrapper[4756]: E1124 13:41:50.477287 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:42:02 crc kubenswrapper[4756]: I1124 13:42:02.475134 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:42:02 crc kubenswrapper[4756]: E1124 13:42:02.475923 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:42:13 crc kubenswrapper[4756]: I1124 13:42:13.476485 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:42:13 crc kubenswrapper[4756]: E1124 13:42:13.477825 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:42:25 crc kubenswrapper[4756]: I1124 13:42:25.475670 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:42:25 crc kubenswrapper[4756]: E1124 13:42:25.477377 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.036682 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kgv96"] Nov 24 13:42:26 crc kubenswrapper[4756]: E1124 13:42:26.037108 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb81bf1f-1772-4e1f-b5bc-6b53c271c755" containerName="extract-utilities" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 
13:42:26.037131 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb81bf1f-1772-4e1f-b5bc-6b53c271c755" containerName="extract-utilities" Nov 24 13:42:26 crc kubenswrapper[4756]: E1124 13:42:26.037179 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb81bf1f-1772-4e1f-b5bc-6b53c271c755" containerName="registry-server" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.037186 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb81bf1f-1772-4e1f-b5bc-6b53c271c755" containerName="registry-server" Nov 24 13:42:26 crc kubenswrapper[4756]: E1124 13:42:26.037205 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb81bf1f-1772-4e1f-b5bc-6b53c271c755" containerName="extract-content" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.037212 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb81bf1f-1772-4e1f-b5bc-6b53c271c755" containerName="extract-content" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.037441 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb81bf1f-1772-4e1f-b5bc-6b53c271c755" containerName="registry-server" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.039018 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.058734 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kgv96"] Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.087454 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d65918b-b65c-46af-a1da-adcf27b5ac69-utilities\") pod \"community-operators-kgv96\" (UID: \"2d65918b-b65c-46af-a1da-adcf27b5ac69\") " pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.087639 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d65918b-b65c-46af-a1da-adcf27b5ac69-catalog-content\") pod \"community-operators-kgv96\" (UID: \"2d65918b-b65c-46af-a1da-adcf27b5ac69\") " pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.087716 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dstvk\" (UniqueName: \"kubernetes.io/projected/2d65918b-b65c-46af-a1da-adcf27b5ac69-kube-api-access-dstvk\") pod \"community-operators-kgv96\" (UID: \"2d65918b-b65c-46af-a1da-adcf27b5ac69\") " pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.189510 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d65918b-b65c-46af-a1da-adcf27b5ac69-utilities\") pod \"community-operators-kgv96\" (UID: \"2d65918b-b65c-46af-a1da-adcf27b5ac69\") " pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.189594 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d65918b-b65c-46af-a1da-adcf27b5ac69-catalog-content\") pod \"community-operators-kgv96\" (UID: \"2d65918b-b65c-46af-a1da-adcf27b5ac69\") " pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.189631 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dstvk\" (UniqueName: \"kubernetes.io/projected/2d65918b-b65c-46af-a1da-adcf27b5ac69-kube-api-access-dstvk\") pod \"community-operators-kgv96\" (UID: \"2d65918b-b65c-46af-a1da-adcf27b5ac69\") " pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.190203 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d65918b-b65c-46af-a1da-adcf27b5ac69-utilities\") pod \"community-operators-kgv96\" (UID: \"2d65918b-b65c-46af-a1da-adcf27b5ac69\") " pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.190226 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d65918b-b65c-46af-a1da-adcf27b5ac69-catalog-content\") pod \"community-operators-kgv96\" (UID: \"2d65918b-b65c-46af-a1da-adcf27b5ac69\") " pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.208496 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dstvk\" (UniqueName: \"kubernetes.io/projected/2d65918b-b65c-46af-a1da-adcf27b5ac69-kube-api-access-dstvk\") pod \"community-operators-kgv96\" (UID: \"2d65918b-b65c-46af-a1da-adcf27b5ac69\") " pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.379759 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.898259 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kgv96"] Nov 24 13:42:26 crc kubenswrapper[4756]: I1124 13:42:26.942425 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kgv96" event={"ID":"2d65918b-b65c-46af-a1da-adcf27b5ac69","Type":"ContainerStarted","Data":"db76af584307e555dd4a4a98ef21a725330897bb1c484657b6e8829c7198774d"} Nov 24 13:42:27 crc kubenswrapper[4756]: I1124 13:42:27.955284 4756 generic.go:334] "Generic (PLEG): container finished" podID="2d65918b-b65c-46af-a1da-adcf27b5ac69" containerID="2ad64e6c877326787b503b0946ed7c29bc72d071babcd6b9cf221148f97154e7" exitCode=0 Nov 24 13:42:27 crc kubenswrapper[4756]: I1124 13:42:27.955388 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kgv96" event={"ID":"2d65918b-b65c-46af-a1da-adcf27b5ac69","Type":"ContainerDied","Data":"2ad64e6c877326787b503b0946ed7c29bc72d071babcd6b9cf221148f97154e7"} Nov 24 13:42:32 crc kubenswrapper[4756]: I1124 13:42:32.002045 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kgv96" event={"ID":"2d65918b-b65c-46af-a1da-adcf27b5ac69","Type":"ContainerStarted","Data":"ad9f01170739028d111c4fcce3028f5f24408be5b9f2d3278164a1dbdc50aadc"} Nov 24 13:42:33 crc kubenswrapper[4756]: I1124 13:42:33.067916 4756 generic.go:334] "Generic (PLEG): container finished" podID="2d65918b-b65c-46af-a1da-adcf27b5ac69" containerID="ad9f01170739028d111c4fcce3028f5f24408be5b9f2d3278164a1dbdc50aadc" exitCode=0 Nov 24 13:42:33 crc kubenswrapper[4756]: I1124 13:42:33.067990 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kgv96" 
event={"ID":"2d65918b-b65c-46af-a1da-adcf27b5ac69","Type":"ContainerDied","Data":"ad9f01170739028d111c4fcce3028f5f24408be5b9f2d3278164a1dbdc50aadc"} Nov 24 13:42:34 crc kubenswrapper[4756]: I1124 13:42:34.093682 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kgv96" event={"ID":"2d65918b-b65c-46af-a1da-adcf27b5ac69","Type":"ContainerStarted","Data":"3459b351a96634dcc8269bdfdba5a899f1561993f728691c40c65a544837928f"} Nov 24 13:42:34 crc kubenswrapper[4756]: I1124 13:42:34.126902 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kgv96" podStartSLOduration=2.568322139 podStartE2EDuration="8.126877655s" podCreationTimestamp="2025-11-24 13:42:26 +0000 UTC" firstStartedPulling="2025-11-24 13:42:27.960211113 +0000 UTC m=+4480.317725255" lastFinishedPulling="2025-11-24 13:42:33.518766619 +0000 UTC m=+4485.876280771" observedRunningTime="2025-11-24 13:42:34.123951267 +0000 UTC m=+4486.481465449" watchObservedRunningTime="2025-11-24 13:42:34.126877655 +0000 UTC m=+4486.484391797" Nov 24 13:42:36 crc kubenswrapper[4756]: I1124 13:42:36.380291 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:36 crc kubenswrapper[4756]: I1124 13:42:36.380645 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:36 crc kubenswrapper[4756]: I1124 13:42:36.655209 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:37 crc kubenswrapper[4756]: I1124 13:42:37.475807 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:42:37 crc kubenswrapper[4756]: E1124 13:42:37.476545 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:42:46 crc kubenswrapper[4756]: I1124 13:42:46.957961 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kgv96" Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.028235 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kgv96"] Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.101263 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5n297"] Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.101717 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5n297" podUID="7edec971-1b8d-498b-a964-4a90bf59a403" containerName="registry-server" containerID="cri-o://47feca8e2a99e21abcf04d4465c90ec29f6da3f3d85b7494b927b25552e1c948" gracePeriod=2 Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.244287 4756 generic.go:334] "Generic (PLEG): container finished" podID="7edec971-1b8d-498b-a964-4a90bf59a403" containerID="47feca8e2a99e21abcf04d4465c90ec29f6da3f3d85b7494b927b25552e1c948" exitCode=0 Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.245310 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n297" event={"ID":"7edec971-1b8d-498b-a964-4a90bf59a403","Type":"ContainerDied","Data":"47feca8e2a99e21abcf04d4465c90ec29f6da3f3d85b7494b927b25552e1c948"} Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.679355 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5n297" Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.775271 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edec971-1b8d-498b-a964-4a90bf59a403-catalog-content\") pod \"7edec971-1b8d-498b-a964-4a90bf59a403\" (UID: \"7edec971-1b8d-498b-a964-4a90bf59a403\") " Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.775528 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfddd\" (UniqueName: \"kubernetes.io/projected/7edec971-1b8d-498b-a964-4a90bf59a403-kube-api-access-gfddd\") pod \"7edec971-1b8d-498b-a964-4a90bf59a403\" (UID: \"7edec971-1b8d-498b-a964-4a90bf59a403\") " Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.775665 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edec971-1b8d-498b-a964-4a90bf59a403-utilities\") pod \"7edec971-1b8d-498b-a964-4a90bf59a403\" (UID: \"7edec971-1b8d-498b-a964-4a90bf59a403\") " Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.776271 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7edec971-1b8d-498b-a964-4a90bf59a403-utilities" (OuterVolumeSpecName: "utilities") pod "7edec971-1b8d-498b-a964-4a90bf59a403" (UID: "7edec971-1b8d-498b-a964-4a90bf59a403"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.787103 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7edec971-1b8d-498b-a964-4a90bf59a403-kube-api-access-gfddd" (OuterVolumeSpecName: "kube-api-access-gfddd") pod "7edec971-1b8d-498b-a964-4a90bf59a403" (UID: "7edec971-1b8d-498b-a964-4a90bf59a403"). InnerVolumeSpecName "kube-api-access-gfddd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.837253 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7edec971-1b8d-498b-a964-4a90bf59a403-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7edec971-1b8d-498b-a964-4a90bf59a403" (UID: "7edec971-1b8d-498b-a964-4a90bf59a403"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.878355 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfddd\" (UniqueName: \"kubernetes.io/projected/7edec971-1b8d-498b-a964-4a90bf59a403-kube-api-access-gfddd\") on node \"crc\" DevicePath \"\"" Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.878393 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7edec971-1b8d-498b-a964-4a90bf59a403-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:42:47 crc kubenswrapper[4756]: I1124 13:42:47.878404 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7edec971-1b8d-498b-a964-4a90bf59a403-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:42:48 crc kubenswrapper[4756]: I1124 13:42:48.258513 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5n297" event={"ID":"7edec971-1b8d-498b-a964-4a90bf59a403","Type":"ContainerDied","Data":"400b3f76bbcdfbb3f8b7d6878fccd080aa2407140712cc21f0d0eaf0bdf25877"} Nov 24 13:42:48 crc kubenswrapper[4756]: I1124 13:42:48.258817 4756 scope.go:117] "RemoveContainer" containerID="47feca8e2a99e21abcf04d4465c90ec29f6da3f3d85b7494b927b25552e1c948" Nov 24 13:42:48 crc kubenswrapper[4756]: I1124 13:42:48.258725 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5n297" Nov 24 13:42:48 crc kubenswrapper[4756]: I1124 13:42:48.298521 4756 scope.go:117] "RemoveContainer" containerID="6a2b6d56e528e0a8a20de80e750864b6fcb6332f635cad2c5923b51da3a5f423" Nov 24 13:42:48 crc kubenswrapper[4756]: I1124 13:42:48.305249 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5n297"] Nov 24 13:42:48 crc kubenswrapper[4756]: I1124 13:42:48.323083 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5n297"] Nov 24 13:42:48 crc kubenswrapper[4756]: I1124 13:42:48.508377 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7edec971-1b8d-498b-a964-4a90bf59a403" path="/var/lib/kubelet/pods/7edec971-1b8d-498b-a964-4a90bf59a403/volumes" Nov 24 13:42:48 crc kubenswrapper[4756]: I1124 13:42:48.812138 4756 scope.go:117] "RemoveContainer" containerID="9e6cd67108a663eb5596901c05bb81f4b720c4ab766c434c12a0a8dee4f9a1ee" Nov 24 13:42:51 crc kubenswrapper[4756]: I1124 13:42:51.476363 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:42:51 crc kubenswrapper[4756]: E1124 13:42:51.483564 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:43:03 crc kubenswrapper[4756]: I1124 13:43:03.475610 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:43:03 crc kubenswrapper[4756]: E1124 13:43:03.476308 4756 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:43:17 crc kubenswrapper[4756]: I1124 13:43:17.475701 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:43:17 crc kubenswrapper[4756]: E1124 13:43:17.476659 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:43:30 crc kubenswrapper[4756]: I1124 13:43:30.475758 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:43:30 crc kubenswrapper[4756]: E1124 13:43:30.476557 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:43:45 crc kubenswrapper[4756]: I1124 13:43:45.476197 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:43:45 crc kubenswrapper[4756]: E1124 13:43:45.476912 4756 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:43:57 crc kubenswrapper[4756]: I1124 13:43:57.476416 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:43:57 crc kubenswrapper[4756]: E1124 13:43:57.477227 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:44:11 crc kubenswrapper[4756]: I1124 13:44:11.476447 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:44:11 crc kubenswrapper[4756]: E1124 13:44:11.477493 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:44:22 crc kubenswrapper[4756]: I1124 13:44:22.475901 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:44:22 crc kubenswrapper[4756]: E1124 13:44:22.477389 4756 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:44:36 crc kubenswrapper[4756]: I1124 13:44:36.476304 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:44:36 crc kubenswrapper[4756]: E1124 13:44:36.477333 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:44:47 crc kubenswrapper[4756]: I1124 13:44:47.475756 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:44:47 crc kubenswrapper[4756]: E1124 13:44:47.476500 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.159213 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x"] Nov 24 13:45:00 crc 
kubenswrapper[4756]: E1124 13:45:00.160384 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7edec971-1b8d-498b-a964-4a90bf59a403" containerName="extract-content" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.160401 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7edec971-1b8d-498b-a964-4a90bf59a403" containerName="extract-content" Nov 24 13:45:00 crc kubenswrapper[4756]: E1124 13:45:00.160427 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7edec971-1b8d-498b-a964-4a90bf59a403" containerName="registry-server" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.160437 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7edec971-1b8d-498b-a964-4a90bf59a403" containerName="registry-server" Nov 24 13:45:00 crc kubenswrapper[4756]: E1124 13:45:00.160459 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7edec971-1b8d-498b-a964-4a90bf59a403" containerName="extract-utilities" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.160468 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7edec971-1b8d-498b-a964-4a90bf59a403" containerName="extract-utilities" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.160733 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7edec971-1b8d-498b-a964-4a90bf59a403" containerName="registry-server" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.161607 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.164498 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.166278 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.172703 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x"] Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.341722 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcq27\" (UniqueName: \"kubernetes.io/projected/ce09d8a5-450b-4bfe-93b7-1364841028cc-kube-api-access-qcq27\") pod \"collect-profiles-29399865-vqh2x\" (UID: \"ce09d8a5-450b-4bfe-93b7-1364841028cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.342019 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce09d8a5-450b-4bfe-93b7-1364841028cc-config-volume\") pod \"collect-profiles-29399865-vqh2x\" (UID: \"ce09d8a5-450b-4bfe-93b7-1364841028cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.342080 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce09d8a5-450b-4bfe-93b7-1364841028cc-secret-volume\") pod \"collect-profiles-29399865-vqh2x\" (UID: \"ce09d8a5-450b-4bfe-93b7-1364841028cc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.443661 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce09d8a5-450b-4bfe-93b7-1364841028cc-config-volume\") pod \"collect-profiles-29399865-vqh2x\" (UID: \"ce09d8a5-450b-4bfe-93b7-1364841028cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.443723 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce09d8a5-450b-4bfe-93b7-1364841028cc-secret-volume\") pod \"collect-profiles-29399865-vqh2x\" (UID: \"ce09d8a5-450b-4bfe-93b7-1364841028cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.443779 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcq27\" (UniqueName: \"kubernetes.io/projected/ce09d8a5-450b-4bfe-93b7-1364841028cc-kube-api-access-qcq27\") pod \"collect-profiles-29399865-vqh2x\" (UID: \"ce09d8a5-450b-4bfe-93b7-1364841028cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.444561 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce09d8a5-450b-4bfe-93b7-1364841028cc-config-volume\") pod \"collect-profiles-29399865-vqh2x\" (UID: \"ce09d8a5-450b-4bfe-93b7-1364841028cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.450689 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ce09d8a5-450b-4bfe-93b7-1364841028cc-secret-volume\") pod \"collect-profiles-29399865-vqh2x\" (UID: \"ce09d8a5-450b-4bfe-93b7-1364841028cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.468219 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcq27\" (UniqueName: \"kubernetes.io/projected/ce09d8a5-450b-4bfe-93b7-1364841028cc-kube-api-access-qcq27\") pod \"collect-profiles-29399865-vqh2x\" (UID: \"ce09d8a5-450b-4bfe-93b7-1364841028cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.486723 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" Nov 24 13:45:00 crc kubenswrapper[4756]: I1124 13:45:00.958562 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x"] Nov 24 13:45:01 crc kubenswrapper[4756]: I1124 13:45:01.476013 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:45:01 crc kubenswrapper[4756]: E1124 13:45:01.476324 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:45:01 crc kubenswrapper[4756]: I1124 13:45:01.621767 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" 
event={"ID":"ce09d8a5-450b-4bfe-93b7-1364841028cc","Type":"ContainerStarted","Data":"ce5048f51b9a938de53e641dc3ece57cb4c7e7331f02a7252f5e27f5b5b70334"} Nov 24 13:45:01 crc kubenswrapper[4756]: I1124 13:45:01.622109 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" event={"ID":"ce09d8a5-450b-4bfe-93b7-1364841028cc","Type":"ContainerStarted","Data":"e7f92d7348060187822e9913b0bdf9542611768226c7534e1abf7c659cdc1704"} Nov 24 13:45:01 crc kubenswrapper[4756]: I1124 13:45:01.642922 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" podStartSLOduration=1.642901054 podStartE2EDuration="1.642901054s" podCreationTimestamp="2025-11-24 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:45:01.635505457 +0000 UTC m=+4633.993019599" watchObservedRunningTime="2025-11-24 13:45:01.642901054 +0000 UTC m=+4634.000415196" Nov 24 13:45:02 crc kubenswrapper[4756]: I1124 13:45:02.633259 4756 generic.go:334] "Generic (PLEG): container finished" podID="ce09d8a5-450b-4bfe-93b7-1364841028cc" containerID="ce5048f51b9a938de53e641dc3ece57cb4c7e7331f02a7252f5e27f5b5b70334" exitCode=0 Nov 24 13:45:02 crc kubenswrapper[4756]: I1124 13:45:02.633298 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" event={"ID":"ce09d8a5-450b-4bfe-93b7-1364841028cc","Type":"ContainerDied","Data":"ce5048f51b9a938de53e641dc3ece57cb4c7e7331f02a7252f5e27f5b5b70334"} Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.156321 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.316880 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce09d8a5-450b-4bfe-93b7-1364841028cc-secret-volume\") pod \"ce09d8a5-450b-4bfe-93b7-1364841028cc\" (UID: \"ce09d8a5-450b-4bfe-93b7-1364841028cc\") " Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.316932 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce09d8a5-450b-4bfe-93b7-1364841028cc-config-volume\") pod \"ce09d8a5-450b-4bfe-93b7-1364841028cc\" (UID: \"ce09d8a5-450b-4bfe-93b7-1364841028cc\") " Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.317109 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcq27\" (UniqueName: \"kubernetes.io/projected/ce09d8a5-450b-4bfe-93b7-1364841028cc-kube-api-access-qcq27\") pod \"ce09d8a5-450b-4bfe-93b7-1364841028cc\" (UID: \"ce09d8a5-450b-4bfe-93b7-1364841028cc\") " Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.317765 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce09d8a5-450b-4bfe-93b7-1364841028cc-config-volume" (OuterVolumeSpecName: "config-volume") pod "ce09d8a5-450b-4bfe-93b7-1364841028cc" (UID: "ce09d8a5-450b-4bfe-93b7-1364841028cc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.323481 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce09d8a5-450b-4bfe-93b7-1364841028cc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ce09d8a5-450b-4bfe-93b7-1364841028cc" (UID: "ce09d8a5-450b-4bfe-93b7-1364841028cc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.324065 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce09d8a5-450b-4bfe-93b7-1364841028cc-kube-api-access-qcq27" (OuterVolumeSpecName: "kube-api-access-qcq27") pod "ce09d8a5-450b-4bfe-93b7-1364841028cc" (UID: "ce09d8a5-450b-4bfe-93b7-1364841028cc"). InnerVolumeSpecName "kube-api-access-qcq27". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.420049 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce09d8a5-450b-4bfe-93b7-1364841028cc-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.420082 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce09d8a5-450b-4bfe-93b7-1364841028cc-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.420093 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcq27\" (UniqueName: \"kubernetes.io/projected/ce09d8a5-450b-4bfe-93b7-1364841028cc-kube-api-access-qcq27\") on node \"crc\" DevicePath \"\"" Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.654114 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" event={"ID":"ce09d8a5-450b-4bfe-93b7-1364841028cc","Type":"ContainerDied","Data":"e7f92d7348060187822e9913b0bdf9542611768226c7534e1abf7c659cdc1704"} Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.654388 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7f92d7348060187822e9913b0bdf9542611768226c7534e1abf7c659cdc1704" Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.654220 4756 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399865-vqh2x" Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.713566 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn"] Nov 24 13:45:04 crc kubenswrapper[4756]: I1124 13:45:04.721534 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399820-fcqzn"] Nov 24 13:45:06 crc kubenswrapper[4756]: I1124 13:45:06.488150 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea2bf010-7ad0-430f-8a16-20dcfb150d38" path="/var/lib/kubelet/pods/ea2bf010-7ad0-430f-8a16-20dcfb150d38/volumes" Nov 24 13:45:16 crc kubenswrapper[4756]: I1124 13:45:16.475670 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:45:16 crc kubenswrapper[4756]: E1124 13:45:16.476824 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:45:21 crc kubenswrapper[4756]: I1124 13:45:21.201914 4756 scope.go:117] "RemoveContainer" containerID="e96ca971a28f4cd0759661f56ac934c75d6b41ffcb5a25de367a360f208d75ea" Nov 24 13:45:27 crc kubenswrapper[4756]: I1124 13:45:27.475926 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:45:27 crc kubenswrapper[4756]: E1124 13:45:27.476864 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:45:38 crc kubenswrapper[4756]: I1124 13:45:38.483871 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:45:38 crc kubenswrapper[4756]: I1124 13:45:38.973048 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"1d6c255b6871907348c9ab51d5ab6365d73f4f83df3b91fbd78ba7ad736fa552"} Nov 24 13:47:54 crc kubenswrapper[4756]: I1124 13:47:54.410120 4756 generic.go:334] "Generic (PLEG): container finished" podID="931a5dda-ad1f-4595-a5b8-3b1820afb648" containerID="40659af9dc86c4fbf7aaff48df1e1dcbeef33e3a1a23de6053348ba1586b5ce0" exitCode=0 Nov 24 13:47:54 crc kubenswrapper[4756]: I1124 13:47:54.410201 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"931a5dda-ad1f-4595-a5b8-3b1820afb648","Type":"ContainerDied","Data":"40659af9dc86c4fbf7aaff48df1e1dcbeef33e3a1a23de6053348ba1586b5ce0"} Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.870255 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.969954 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-openstack-config-secret\") pod \"931a5dda-ad1f-4595-a5b8-3b1820afb648\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.970273 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"931a5dda-ad1f-4595-a5b8-3b1820afb648\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.970405 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/931a5dda-ad1f-4595-a5b8-3b1820afb648-config-data\") pod \"931a5dda-ad1f-4595-a5b8-3b1820afb648\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.970534 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/931a5dda-ad1f-4595-a5b8-3b1820afb648-test-operator-ephemeral-temporary\") pod \"931a5dda-ad1f-4595-a5b8-3b1820afb648\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.970597 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzllb\" (UniqueName: \"kubernetes.io/projected/931a5dda-ad1f-4595-a5b8-3b1820afb648-kube-api-access-hzllb\") pod \"931a5dda-ad1f-4595-a5b8-3b1820afb648\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.970620 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-ca-certs\") pod \"931a5dda-ad1f-4595-a5b8-3b1820afb648\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.970649 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/931a5dda-ad1f-4595-a5b8-3b1820afb648-test-operator-ephemeral-workdir\") pod \"931a5dda-ad1f-4595-a5b8-3b1820afb648\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.970686 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/931a5dda-ad1f-4595-a5b8-3b1820afb648-openstack-config\") pod \"931a5dda-ad1f-4595-a5b8-3b1820afb648\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.970939 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-ssh-key\") pod \"931a5dda-ad1f-4595-a5b8-3b1820afb648\" (UID: \"931a5dda-ad1f-4595-a5b8-3b1820afb648\") " Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.971061 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/931a5dda-ad1f-4595-a5b8-3b1820afb648-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "931a5dda-ad1f-4595-a5b8-3b1820afb648" (UID: "931a5dda-ad1f-4595-a5b8-3b1820afb648"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.971615 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/931a5dda-ad1f-4595-a5b8-3b1820afb648-config-data" (OuterVolumeSpecName: "config-data") pod "931a5dda-ad1f-4595-a5b8-3b1820afb648" (UID: "931a5dda-ad1f-4595-a5b8-3b1820afb648"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.972527 4756 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/931a5dda-ad1f-4595-a5b8-3b1820afb648-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.977534 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/931a5dda-ad1f-4595-a5b8-3b1820afb648-kube-api-access-hzllb" (OuterVolumeSpecName: "kube-api-access-hzllb") pod "931a5dda-ad1f-4595-a5b8-3b1820afb648" (UID: "931a5dda-ad1f-4595-a5b8-3b1820afb648"). InnerVolumeSpecName "kube-api-access-hzllb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:47:55 crc kubenswrapper[4756]: I1124 13:47:55.991114 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "931a5dda-ad1f-4595-a5b8-3b1820afb648" (UID: "931a5dda-ad1f-4595-a5b8-3b1820afb648"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.012084 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "931a5dda-ad1f-4595-a5b8-3b1820afb648" (UID: "931a5dda-ad1f-4595-a5b8-3b1820afb648"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.014099 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "931a5dda-ad1f-4595-a5b8-3b1820afb648" (UID: "931a5dda-ad1f-4595-a5b8-3b1820afb648"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.014597 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "931a5dda-ad1f-4595-a5b8-3b1820afb648" (UID: "931a5dda-ad1f-4595-a5b8-3b1820afb648"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.030105 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/931a5dda-ad1f-4595-a5b8-3b1820afb648-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "931a5dda-ad1f-4595-a5b8-3b1820afb648" (UID: "931a5dda-ad1f-4595-a5b8-3b1820afb648"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.060933 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/931a5dda-ad1f-4595-a5b8-3b1820afb648-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "931a5dda-ad1f-4595-a5b8-3b1820afb648" (UID: "931a5dda-ad1f-4595-a5b8-3b1820afb648"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.075002 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzllb\" (UniqueName: \"kubernetes.io/projected/931a5dda-ad1f-4595-a5b8-3b1820afb648-kube-api-access-hzllb\") on node \"crc\" DevicePath \"\"" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.075039 4756 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.075051 4756 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/931a5dda-ad1f-4595-a5b8-3b1820afb648-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.075068 4756 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/931a5dda-ad1f-4595-a5b8-3b1820afb648-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.075081 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.075091 4756 
reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/931a5dda-ad1f-4595-a5b8-3b1820afb648-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.075130 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.075143 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/931a5dda-ad1f-4595-a5b8-3b1820afb648-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.102298 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.177321 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.438045 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"931a5dda-ad1f-4595-a5b8-3b1820afb648","Type":"ContainerDied","Data":"55e3ca6f68bae72991e1e51b5d5b8841f0cec1f7195c8c7bbc4db116b06ae275"} Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.438084 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55e3ca6f68bae72991e1e51b5d5b8841f0cec1f7195c8c7bbc4db116b06ae275" Nov 24 13:47:56 crc kubenswrapper[4756]: I1124 13:47:56.438134 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 13:48:03 crc kubenswrapper[4756]: I1124 13:48:03.479074 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:48:03 crc kubenswrapper[4756]: I1124 13:48:03.479886 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.226319 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 13:48:05 crc kubenswrapper[4756]: E1124 13:48:05.227219 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce09d8a5-450b-4bfe-93b7-1364841028cc" containerName="collect-profiles" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.227239 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce09d8a5-450b-4bfe-93b7-1364841028cc" containerName="collect-profiles" Nov 24 13:48:05 crc kubenswrapper[4756]: E1124 13:48:05.227271 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="931a5dda-ad1f-4595-a5b8-3b1820afb648" containerName="tempest-tests-tempest-tests-runner" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.227280 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="931a5dda-ad1f-4595-a5b8-3b1820afb648" containerName="tempest-tests-tempest-tests-runner" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.227554 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="931a5dda-ad1f-4595-a5b8-3b1820afb648" 
containerName="tempest-tests-tempest-tests-runner" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.227575 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce09d8a5-450b-4bfe-93b7-1364841028cc" containerName="collect-profiles" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.228419 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.231990 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hwjbh" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.246467 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.393588 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xndp\" (UniqueName: \"kubernetes.io/projected/11285260-11ac-42da-b521-1be38199040e-kube-api-access-4xndp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11285260-11ac-42da-b521-1be38199040e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.393703 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11285260-11ac-42da-b521-1be38199040e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.494904 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11285260-11ac-42da-b521-1be38199040e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.495363 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xndp\" (UniqueName: \"kubernetes.io/projected/11285260-11ac-42da-b521-1be38199040e-kube-api-access-4xndp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11285260-11ac-42da-b521-1be38199040e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.495539 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11285260-11ac-42da-b521-1be38199040e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.526352 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xndp\" (UniqueName: \"kubernetes.io/projected/11285260-11ac-42da-b521-1be38199040e-kube-api-access-4xndp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11285260-11ac-42da-b521-1be38199040e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.554227 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11285260-11ac-42da-b521-1be38199040e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:48:05 crc kubenswrapper[4756]: I1124 13:48:05.574703 4756 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 13:48:06 crc kubenswrapper[4756]: I1124 13:48:06.037576 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 13:48:06 crc kubenswrapper[4756]: I1124 13:48:06.053441 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 13:48:06 crc kubenswrapper[4756]: I1124 13:48:06.556447 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"11285260-11ac-42da-b521-1be38199040e","Type":"ContainerStarted","Data":"c7022a43cf9a73386667c5602365fb65f981a28bbdc00d90b10270377c2aab24"} Nov 24 13:48:07 crc kubenswrapper[4756]: I1124 13:48:07.570848 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"11285260-11ac-42da-b521-1be38199040e","Type":"ContainerStarted","Data":"32262c455dae140812e59d514acc543eb3b2905d6fa2faf66460cc59040254e7"} Nov 24 13:48:07 crc kubenswrapper[4756]: I1124 13:48:07.597614 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.6752861559999999 podStartE2EDuration="2.597581486s" podCreationTimestamp="2025-11-24 13:48:05 +0000 UTC" firstStartedPulling="2025-11-24 13:48:06.037300635 +0000 UTC m=+4818.394814777" lastFinishedPulling="2025-11-24 13:48:06.959595955 +0000 UTC m=+4819.317110107" observedRunningTime="2025-11-24 13:48:07.596680692 +0000 UTC m=+4819.954194854" watchObservedRunningTime="2025-11-24 13:48:07.597581486 +0000 UTC m=+4819.955095668" Nov 24 13:48:31 crc kubenswrapper[4756]: I1124 13:48:31.051877 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vqv5d/must-gather-48cgw"] Nov 
24 13:48:31 crc kubenswrapper[4756]: I1124 13:48:31.054126 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vqv5d/must-gather-48cgw" Nov 24 13:48:31 crc kubenswrapper[4756]: I1124 13:48:31.058135 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vqv5d"/"openshift-service-ca.crt" Nov 24 13:48:31 crc kubenswrapper[4756]: I1124 13:48:31.058407 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vqv5d"/"kube-root-ca.crt" Nov 24 13:48:31 crc kubenswrapper[4756]: I1124 13:48:31.058560 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vqv5d"/"default-dockercfg-qwljm" Nov 24 13:48:31 crc kubenswrapper[4756]: I1124 13:48:31.068701 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vqv5d/must-gather-48cgw"] Nov 24 13:48:31 crc kubenswrapper[4756]: I1124 13:48:31.245711 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/163191c1-015e-4d7b-a7fb-982e125285ca-must-gather-output\") pod \"must-gather-48cgw\" (UID: \"163191c1-015e-4d7b-a7fb-982e125285ca\") " pod="openshift-must-gather-vqv5d/must-gather-48cgw" Nov 24 13:48:31 crc kubenswrapper[4756]: I1124 13:48:31.245909 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzphc\" (UniqueName: \"kubernetes.io/projected/163191c1-015e-4d7b-a7fb-982e125285ca-kube-api-access-bzphc\") pod \"must-gather-48cgw\" (UID: \"163191c1-015e-4d7b-a7fb-982e125285ca\") " pod="openshift-must-gather-vqv5d/must-gather-48cgw" Nov 24 13:48:31 crc kubenswrapper[4756]: I1124 13:48:31.347921 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzphc\" (UniqueName: 
\"kubernetes.io/projected/163191c1-015e-4d7b-a7fb-982e125285ca-kube-api-access-bzphc\") pod \"must-gather-48cgw\" (UID: \"163191c1-015e-4d7b-a7fb-982e125285ca\") " pod="openshift-must-gather-vqv5d/must-gather-48cgw" Nov 24 13:48:31 crc kubenswrapper[4756]: I1124 13:48:31.348011 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/163191c1-015e-4d7b-a7fb-982e125285ca-must-gather-output\") pod \"must-gather-48cgw\" (UID: \"163191c1-015e-4d7b-a7fb-982e125285ca\") " pod="openshift-must-gather-vqv5d/must-gather-48cgw" Nov 24 13:48:31 crc kubenswrapper[4756]: I1124 13:48:31.348449 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/163191c1-015e-4d7b-a7fb-982e125285ca-must-gather-output\") pod \"must-gather-48cgw\" (UID: \"163191c1-015e-4d7b-a7fb-982e125285ca\") " pod="openshift-must-gather-vqv5d/must-gather-48cgw" Nov 24 13:48:31 crc kubenswrapper[4756]: I1124 13:48:31.375313 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzphc\" (UniqueName: \"kubernetes.io/projected/163191c1-015e-4d7b-a7fb-982e125285ca-kube-api-access-bzphc\") pod \"must-gather-48cgw\" (UID: \"163191c1-015e-4d7b-a7fb-982e125285ca\") " pod="openshift-must-gather-vqv5d/must-gather-48cgw" Nov 24 13:48:31 crc kubenswrapper[4756]: I1124 13:48:31.672558 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vqv5d/must-gather-48cgw" Nov 24 13:48:32 crc kubenswrapper[4756]: I1124 13:48:32.151018 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vqv5d/must-gather-48cgw"] Nov 24 13:48:32 crc kubenswrapper[4756]: I1124 13:48:32.835709 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqv5d/must-gather-48cgw" event={"ID":"163191c1-015e-4d7b-a7fb-982e125285ca","Type":"ContainerStarted","Data":"ab2d2bfeb93aba041825cc1014ce168a1553e4bdea1865c76202b84440537de9"} Nov 24 13:48:33 crc kubenswrapper[4756]: I1124 13:48:33.479628 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:48:33 crc kubenswrapper[4756]: I1124 13:48:33.479715 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:48:38 crc kubenswrapper[4756]: I1124 13:48:38.893246 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqv5d/must-gather-48cgw" event={"ID":"163191c1-015e-4d7b-a7fb-982e125285ca","Type":"ContainerStarted","Data":"e4ca40aeb1decdcd466b081c35788dd2e25fef4c8fd208c1570f9c8b57308d85"} Nov 24 13:48:38 crc kubenswrapper[4756]: I1124 13:48:38.893906 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqv5d/must-gather-48cgw" event={"ID":"163191c1-015e-4d7b-a7fb-982e125285ca","Type":"ContainerStarted","Data":"5f3801d5e81249c06e7acff00e0d28390b5eb585b8e7c7af4eab3023fa56ff2e"} Nov 24 13:48:38 crc 
kubenswrapper[4756]: I1124 13:48:38.927784 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vqv5d/must-gather-48cgw" podStartSLOduration=2.280717807 podStartE2EDuration="7.927752509s" podCreationTimestamp="2025-11-24 13:48:31 +0000 UTC" firstStartedPulling="2025-11-24 13:48:32.158193848 +0000 UTC m=+4844.515707990" lastFinishedPulling="2025-11-24 13:48:37.80522855 +0000 UTC m=+4850.162742692" observedRunningTime="2025-11-24 13:48:38.910729826 +0000 UTC m=+4851.268243978" watchObservedRunningTime="2025-11-24 13:48:38.927752509 +0000 UTC m=+4851.285266651" Nov 24 13:48:41 crc kubenswrapper[4756]: I1124 13:48:41.854863 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vqv5d/crc-debug-5m8h7"] Nov 24 13:48:41 crc kubenswrapper[4756]: I1124 13:48:41.856779 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" Nov 24 13:48:41 crc kubenswrapper[4756]: I1124 13:48:41.875407 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/91deb487-c0ba-4ae0-b027-f2d3cf65fd02-host\") pod \"crc-debug-5m8h7\" (UID: \"91deb487-c0ba-4ae0-b027-f2d3cf65fd02\") " pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" Nov 24 13:48:41 crc kubenswrapper[4756]: I1124 13:48:41.876096 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcm2g\" (UniqueName: \"kubernetes.io/projected/91deb487-c0ba-4ae0-b027-f2d3cf65fd02-kube-api-access-mcm2g\") pod \"crc-debug-5m8h7\" (UID: \"91deb487-c0ba-4ae0-b027-f2d3cf65fd02\") " pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" Nov 24 13:48:41 crc kubenswrapper[4756]: I1124 13:48:41.978685 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcm2g\" (UniqueName: 
\"kubernetes.io/projected/91deb487-c0ba-4ae0-b027-f2d3cf65fd02-kube-api-access-mcm2g\") pod \"crc-debug-5m8h7\" (UID: \"91deb487-c0ba-4ae0-b027-f2d3cf65fd02\") " pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" Nov 24 13:48:41 crc kubenswrapper[4756]: I1124 13:48:41.978779 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/91deb487-c0ba-4ae0-b027-f2d3cf65fd02-host\") pod \"crc-debug-5m8h7\" (UID: \"91deb487-c0ba-4ae0-b027-f2d3cf65fd02\") " pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" Nov 24 13:48:41 crc kubenswrapper[4756]: I1124 13:48:41.978922 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/91deb487-c0ba-4ae0-b027-f2d3cf65fd02-host\") pod \"crc-debug-5m8h7\" (UID: \"91deb487-c0ba-4ae0-b027-f2d3cf65fd02\") " pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" Nov 24 13:48:42 crc kubenswrapper[4756]: I1124 13:48:42.006701 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcm2g\" (UniqueName: \"kubernetes.io/projected/91deb487-c0ba-4ae0-b027-f2d3cf65fd02-kube-api-access-mcm2g\") pod \"crc-debug-5m8h7\" (UID: \"91deb487-c0ba-4ae0-b027-f2d3cf65fd02\") " pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" Nov 24 13:48:42 crc kubenswrapper[4756]: I1124 13:48:42.175460 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" Nov 24 13:48:42 crc kubenswrapper[4756]: W1124 13:48:42.225543 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91deb487_c0ba_4ae0_b027_f2d3cf65fd02.slice/crio-d7276cbd22079c2d8196e00d9ce785a340049cdf55252c7b9e52355a9539afb4 WatchSource:0}: Error finding container d7276cbd22079c2d8196e00d9ce785a340049cdf55252c7b9e52355a9539afb4: Status 404 returned error can't find the container with id d7276cbd22079c2d8196e00d9ce785a340049cdf55252c7b9e52355a9539afb4 Nov 24 13:48:42 crc kubenswrapper[4756]: I1124 13:48:42.936388 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" event={"ID":"91deb487-c0ba-4ae0-b027-f2d3cf65fd02","Type":"ContainerStarted","Data":"d7276cbd22079c2d8196e00d9ce785a340049cdf55252c7b9e52355a9539afb4"} Nov 24 13:48:56 crc kubenswrapper[4756]: I1124 13:48:56.064320 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" event={"ID":"91deb487-c0ba-4ae0-b027-f2d3cf65fd02","Type":"ContainerStarted","Data":"b4e625b914745d96392e2161c1e01dbfdfde60fbf95c98967c489eada4aa6b71"} Nov 24 13:48:56 crc kubenswrapper[4756]: I1124 13:48:56.086963 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" podStartSLOduration=2.122584196 podStartE2EDuration="15.086942503s" podCreationTimestamp="2025-11-24 13:48:41 +0000 UTC" firstStartedPulling="2025-11-24 13:48:42.227894711 +0000 UTC m=+4854.585408853" lastFinishedPulling="2025-11-24 13:48:55.192253018 +0000 UTC m=+4867.549767160" observedRunningTime="2025-11-24 13:48:56.077534912 +0000 UTC m=+4868.435049064" watchObservedRunningTime="2025-11-24 13:48:56.086942503 +0000 UTC m=+4868.444456635" Nov 24 13:49:03 crc kubenswrapper[4756]: I1124 13:49:03.479257 4756 patch_prober.go:28] interesting 
pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:49:03 crc kubenswrapper[4756]: I1124 13:49:03.479760 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:49:03 crc kubenswrapper[4756]: I1124 13:49:03.479806 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 13:49:03 crc kubenswrapper[4756]: I1124 13:49:03.480397 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d6c255b6871907348c9ab51d5ab6365d73f4f83df3b91fbd78ba7ad736fa552"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:49:03 crc kubenswrapper[4756]: I1124 13:49:03.480460 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://1d6c255b6871907348c9ab51d5ab6365d73f4f83df3b91fbd78ba7ad736fa552" gracePeriod=600 Nov 24 13:49:04 crc kubenswrapper[4756]: I1124 13:49:04.139257 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="1d6c255b6871907348c9ab51d5ab6365d73f4f83df3b91fbd78ba7ad736fa552" exitCode=0 Nov 24 13:49:04 crc kubenswrapper[4756]: I1124 13:49:04.139356 
4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"1d6c255b6871907348c9ab51d5ab6365d73f4f83df3b91fbd78ba7ad736fa552"} Nov 24 13:49:04 crc kubenswrapper[4756]: I1124 13:49:04.139607 4756 scope.go:117] "RemoveContainer" containerID="e5f9f73e1184ae837e6ddba5c3932ee0153df6b2ab19a6c286f018056d050a2a" Nov 24 13:49:07 crc kubenswrapper[4756]: I1124 13:49:07.166427 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626"} Nov 24 13:49:26 crc kubenswrapper[4756]: I1124 13:49:26.562222 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pmzzs"] Nov 24 13:49:26 crc kubenswrapper[4756]: I1124 13:49:26.567778 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:26 crc kubenswrapper[4756]: I1124 13:49:26.576608 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmzzs"] Nov 24 13:49:26 crc kubenswrapper[4756]: I1124 13:49:26.666809 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-utilities\") pod \"redhat-marketplace-pmzzs\" (UID: \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\") " pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:26 crc kubenswrapper[4756]: I1124 13:49:26.666877 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-catalog-content\") pod \"redhat-marketplace-pmzzs\" (UID: \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\") " pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:26 crc kubenswrapper[4756]: I1124 13:49:26.666898 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvz2l\" (UniqueName: \"kubernetes.io/projected/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-kube-api-access-rvz2l\") pod \"redhat-marketplace-pmzzs\" (UID: \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\") " pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:26 crc kubenswrapper[4756]: I1124 13:49:26.769276 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-utilities\") pod \"redhat-marketplace-pmzzs\" (UID: \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\") " pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:26 crc kubenswrapper[4756]: I1124 13:49:26.769406 4756 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-catalog-content\") pod \"redhat-marketplace-pmzzs\" (UID: \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\") " pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:26 crc kubenswrapper[4756]: I1124 13:49:26.769430 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvz2l\" (UniqueName: \"kubernetes.io/projected/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-kube-api-access-rvz2l\") pod \"redhat-marketplace-pmzzs\" (UID: \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\") " pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:26 crc kubenswrapper[4756]: I1124 13:49:26.769758 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-utilities\") pod \"redhat-marketplace-pmzzs\" (UID: \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\") " pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:26 crc kubenswrapper[4756]: I1124 13:49:26.769788 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-catalog-content\") pod \"redhat-marketplace-pmzzs\" (UID: \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\") " pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:26 crc kubenswrapper[4756]: I1124 13:49:26.800263 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvz2l\" (UniqueName: \"kubernetes.io/projected/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-kube-api-access-rvz2l\") pod \"redhat-marketplace-pmzzs\" (UID: \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\") " pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:26 crc kubenswrapper[4756]: I1124 13:49:26.894403 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:27 crc kubenswrapper[4756]: I1124 13:49:27.521140 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmzzs"] Nov 24 13:49:28 crc kubenswrapper[4756]: I1124 13:49:28.343323 4756 generic.go:334] "Generic (PLEG): container finished" podID="4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" containerID="daab870c54eef05ba5273dbba3255221b6695616f5a646331ead48077a41201a" exitCode=0 Nov 24 13:49:28 crc kubenswrapper[4756]: I1124 13:49:28.343403 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmzzs" event={"ID":"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d","Type":"ContainerDied","Data":"daab870c54eef05ba5273dbba3255221b6695616f5a646331ead48077a41201a"} Nov 24 13:49:28 crc kubenswrapper[4756]: I1124 13:49:28.343893 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmzzs" event={"ID":"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d","Type":"ContainerStarted","Data":"1c38247c8dcfddb486f3661dd179b0de8d30181878480f695b090289010a8c39"} Nov 24 13:49:29 crc kubenswrapper[4756]: I1124 13:49:29.354935 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmzzs" event={"ID":"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d","Type":"ContainerStarted","Data":"715340bbe79e96ae9847d313cd9058c7302dff7d8fe78f5427d9c954d395a665"} Nov 24 13:49:30 crc kubenswrapper[4756]: I1124 13:49:30.366576 4756 generic.go:334] "Generic (PLEG): container finished" podID="4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" containerID="715340bbe79e96ae9847d313cd9058c7302dff7d8fe78f5427d9c954d395a665" exitCode=0 Nov 24 13:49:30 crc kubenswrapper[4756]: I1124 13:49:30.366682 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmzzs" 
event={"ID":"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d","Type":"ContainerDied","Data":"715340bbe79e96ae9847d313cd9058c7302dff7d8fe78f5427d9c954d395a665"} Nov 24 13:49:31 crc kubenswrapper[4756]: I1124 13:49:31.376102 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmzzs" event={"ID":"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d","Type":"ContainerStarted","Data":"0ce87aec85a8cf37a77a462fe0d903055003d0e07b19e9d20b13725bf741fc7a"} Nov 24 13:49:31 crc kubenswrapper[4756]: I1124 13:49:31.396443 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pmzzs" podStartSLOduration=2.962719993 podStartE2EDuration="5.396423591s" podCreationTimestamp="2025-11-24 13:49:26 +0000 UTC" firstStartedPulling="2025-11-24 13:49:28.346600613 +0000 UTC m=+4900.704114755" lastFinishedPulling="2025-11-24 13:49:30.780304211 +0000 UTC m=+4903.137818353" observedRunningTime="2025-11-24 13:49:31.395393674 +0000 UTC m=+4903.752907836" watchObservedRunningTime="2025-11-24 13:49:31.396423591 +0000 UTC m=+4903.753937743" Nov 24 13:49:36 crc kubenswrapper[4756]: I1124 13:49:36.895141 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:36 crc kubenswrapper[4756]: I1124 13:49:36.895774 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:36 crc kubenswrapper[4756]: I1124 13:49:36.952370 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:37 crc kubenswrapper[4756]: I1124 13:49:37.487652 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:37 crc kubenswrapper[4756]: I1124 13:49:37.537083 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-pmzzs"] Nov 24 13:49:39 crc kubenswrapper[4756]: I1124 13:49:39.463061 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pmzzs" podUID="4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" containerName="registry-server" containerID="cri-o://0ce87aec85a8cf37a77a462fe0d903055003d0e07b19e9d20b13725bf741fc7a" gracePeriod=2 Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.476096 4756 generic.go:334] "Generic (PLEG): container finished" podID="4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" containerID="0ce87aec85a8cf37a77a462fe0d903055003d0e07b19e9d20b13725bf741fc7a" exitCode=0 Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.486967 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmzzs" event={"ID":"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d","Type":"ContainerDied","Data":"0ce87aec85a8cf37a77a462fe0d903055003d0e07b19e9d20b13725bf741fc7a"} Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.487008 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmzzs" event={"ID":"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d","Type":"ContainerDied","Data":"1c38247c8dcfddb486f3661dd179b0de8d30181878480f695b090289010a8c39"} Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.487020 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c38247c8dcfddb486f3661dd179b0de8d30181878480f695b090289010a8c39" Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.767078 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.784959 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-catalog-content\") pod \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\" (UID: \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\") " Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.785078 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-utilities\") pod \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\" (UID: \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\") " Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.785454 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvz2l\" (UniqueName: \"kubernetes.io/projected/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-kube-api-access-rvz2l\") pod \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\" (UID: \"4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d\") " Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.787605 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-utilities" (OuterVolumeSpecName: "utilities") pod "4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" (UID: "4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.796743 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.808145 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" (UID: "4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.825053 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-kube-api-access-rvz2l" (OuterVolumeSpecName: "kube-api-access-rvz2l") pod "4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" (UID: "4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d"). InnerVolumeSpecName "kube-api-access-rvz2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.898034 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvz2l\" (UniqueName: \"kubernetes.io/projected/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-kube-api-access-rvz2l\") on node \"crc\" DevicePath \"\"" Nov 24 13:49:40 crc kubenswrapper[4756]: I1124 13:49:40.898066 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:49:41 crc kubenswrapper[4756]: I1124 13:49:41.485731 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmzzs" Nov 24 13:49:41 crc kubenswrapper[4756]: I1124 13:49:41.526626 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmzzs"] Nov 24 13:49:41 crc kubenswrapper[4756]: I1124 13:49:41.541002 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmzzs"] Nov 24 13:49:42 crc kubenswrapper[4756]: I1124 13:49:42.495803 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" path="/var/lib/kubelet/pods/4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d/volumes" Nov 24 13:49:46 crc kubenswrapper[4756]: I1124 13:49:46.538572 4756 generic.go:334] "Generic (PLEG): container finished" podID="91deb487-c0ba-4ae0-b027-f2d3cf65fd02" containerID="b4e625b914745d96392e2161c1e01dbfdfde60fbf95c98967c489eada4aa6b71" exitCode=0 Nov 24 13:49:46 crc kubenswrapper[4756]: I1124 13:49:46.538674 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" event={"ID":"91deb487-c0ba-4ae0-b027-f2d3cf65fd02","Type":"ContainerDied","Data":"b4e625b914745d96392e2161c1e01dbfdfde60fbf95c98967c489eada4aa6b71"} Nov 24 13:49:47 crc kubenswrapper[4756]: I1124 13:49:47.679293 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" Nov 24 13:49:47 crc kubenswrapper[4756]: I1124 13:49:47.719651 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vqv5d/crc-debug-5m8h7"] Nov 24 13:49:47 crc kubenswrapper[4756]: I1124 13:49:47.732860 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vqv5d/crc-debug-5m8h7"] Nov 24 13:49:47 crc kubenswrapper[4756]: I1124 13:49:47.733218 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcm2g\" (UniqueName: \"kubernetes.io/projected/91deb487-c0ba-4ae0-b027-f2d3cf65fd02-kube-api-access-mcm2g\") pod \"91deb487-c0ba-4ae0-b027-f2d3cf65fd02\" (UID: \"91deb487-c0ba-4ae0-b027-f2d3cf65fd02\") " Nov 24 13:49:47 crc kubenswrapper[4756]: I1124 13:49:47.733501 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/91deb487-c0ba-4ae0-b027-f2d3cf65fd02-host\") pod \"91deb487-c0ba-4ae0-b027-f2d3cf65fd02\" (UID: \"91deb487-c0ba-4ae0-b027-f2d3cf65fd02\") " Nov 24 13:49:47 crc kubenswrapper[4756]: I1124 13:49:47.733560 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91deb487-c0ba-4ae0-b027-f2d3cf65fd02-host" (OuterVolumeSpecName: "host") pod "91deb487-c0ba-4ae0-b027-f2d3cf65fd02" (UID: "91deb487-c0ba-4ae0-b027-f2d3cf65fd02"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 13:49:47 crc kubenswrapper[4756]: I1124 13:49:47.734014 4756 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/91deb487-c0ba-4ae0-b027-f2d3cf65fd02-host\") on node \"crc\" DevicePath \"\"" Nov 24 13:49:47 crc kubenswrapper[4756]: I1124 13:49:47.739454 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91deb487-c0ba-4ae0-b027-f2d3cf65fd02-kube-api-access-mcm2g" (OuterVolumeSpecName: "kube-api-access-mcm2g") pod "91deb487-c0ba-4ae0-b027-f2d3cf65fd02" (UID: "91deb487-c0ba-4ae0-b027-f2d3cf65fd02"). InnerVolumeSpecName "kube-api-access-mcm2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:49:47 crc kubenswrapper[4756]: I1124 13:49:47.836717 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcm2g\" (UniqueName: \"kubernetes.io/projected/91deb487-c0ba-4ae0-b027-f2d3cf65fd02-kube-api-access-mcm2g\") on node \"crc\" DevicePath \"\"" Nov 24 13:49:48 crc kubenswrapper[4756]: I1124 13:49:48.490534 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91deb487-c0ba-4ae0-b027-f2d3cf65fd02" path="/var/lib/kubelet/pods/91deb487-c0ba-4ae0-b027-f2d3cf65fd02/volumes" Nov 24 13:49:48 crc kubenswrapper[4756]: I1124 13:49:48.565355 4756 scope.go:117] "RemoveContainer" containerID="b4e625b914745d96392e2161c1e01dbfdfde60fbf95c98967c489eada4aa6b71" Nov 24 13:49:48 crc kubenswrapper[4756]: I1124 13:49:48.565402 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vqv5d/crc-debug-5m8h7" Nov 24 13:49:48 crc kubenswrapper[4756]: I1124 13:49:48.902241 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vqv5d/crc-debug-c9876"] Nov 24 13:49:48 crc kubenswrapper[4756]: E1124 13:49:48.903266 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91deb487-c0ba-4ae0-b027-f2d3cf65fd02" containerName="container-00" Nov 24 13:49:48 crc kubenswrapper[4756]: I1124 13:49:48.903289 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="91deb487-c0ba-4ae0-b027-f2d3cf65fd02" containerName="container-00" Nov 24 13:49:48 crc kubenswrapper[4756]: E1124 13:49:48.903351 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" containerName="extract-content" Nov 24 13:49:48 crc kubenswrapper[4756]: I1124 13:49:48.903366 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" containerName="extract-content" Nov 24 13:49:48 crc kubenswrapper[4756]: E1124 13:49:48.903388 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" containerName="registry-server" Nov 24 13:49:48 crc kubenswrapper[4756]: I1124 13:49:48.903401 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" containerName="registry-server" Nov 24 13:49:48 crc kubenswrapper[4756]: E1124 13:49:48.903429 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" containerName="extract-utilities" Nov 24 13:49:48 crc kubenswrapper[4756]: I1124 13:49:48.903442 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" containerName="extract-utilities" Nov 24 13:49:48 crc kubenswrapper[4756]: I1124 13:49:48.903764 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea5f0b8-d65b-4e44-9fa8-078ff696ca0d" 
containerName="registry-server" Nov 24 13:49:48 crc kubenswrapper[4756]: I1124 13:49:48.903812 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="91deb487-c0ba-4ae0-b027-f2d3cf65fd02" containerName="container-00" Nov 24 13:49:48 crc kubenswrapper[4756]: I1124 13:49:48.904914 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vqv5d/crc-debug-c9876" Nov 24 13:49:48 crc kubenswrapper[4756]: I1124 13:49:48.965743 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41fb1c69-73d6-4681-8ead-585525a1c180-host\") pod \"crc-debug-c9876\" (UID: \"41fb1c69-73d6-4681-8ead-585525a1c180\") " pod="openshift-must-gather-vqv5d/crc-debug-c9876" Nov 24 13:49:48 crc kubenswrapper[4756]: I1124 13:49:48.966132 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcc2x\" (UniqueName: \"kubernetes.io/projected/41fb1c69-73d6-4681-8ead-585525a1c180-kube-api-access-mcc2x\") pod \"crc-debug-c9876\" (UID: \"41fb1c69-73d6-4681-8ead-585525a1c180\") " pod="openshift-must-gather-vqv5d/crc-debug-c9876" Nov 24 13:49:49 crc kubenswrapper[4756]: I1124 13:49:49.067807 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcc2x\" (UniqueName: \"kubernetes.io/projected/41fb1c69-73d6-4681-8ead-585525a1c180-kube-api-access-mcc2x\") pod \"crc-debug-c9876\" (UID: \"41fb1c69-73d6-4681-8ead-585525a1c180\") " pod="openshift-must-gather-vqv5d/crc-debug-c9876" Nov 24 13:49:49 crc kubenswrapper[4756]: I1124 13:49:49.067963 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41fb1c69-73d6-4681-8ead-585525a1c180-host\") pod \"crc-debug-c9876\" (UID: \"41fb1c69-73d6-4681-8ead-585525a1c180\") " pod="openshift-must-gather-vqv5d/crc-debug-c9876" Nov 24 13:49:49 crc 
kubenswrapper[4756]: I1124 13:49:49.068248 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41fb1c69-73d6-4681-8ead-585525a1c180-host\") pod \"crc-debug-c9876\" (UID: \"41fb1c69-73d6-4681-8ead-585525a1c180\") " pod="openshift-must-gather-vqv5d/crc-debug-c9876" Nov 24 13:49:49 crc kubenswrapper[4756]: I1124 13:49:49.086903 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcc2x\" (UniqueName: \"kubernetes.io/projected/41fb1c69-73d6-4681-8ead-585525a1c180-kube-api-access-mcc2x\") pod \"crc-debug-c9876\" (UID: \"41fb1c69-73d6-4681-8ead-585525a1c180\") " pod="openshift-must-gather-vqv5d/crc-debug-c9876" Nov 24 13:49:49 crc kubenswrapper[4756]: I1124 13:49:49.229390 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vqv5d/crc-debug-c9876" Nov 24 13:49:49 crc kubenswrapper[4756]: I1124 13:49:49.577730 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqv5d/crc-debug-c9876" event={"ID":"41fb1c69-73d6-4681-8ead-585525a1c180","Type":"ContainerStarted","Data":"f39aac2506a5ef249e5bf931be5f996b513cf5f7705e3de7c32f7081d8d6fcea"} Nov 24 13:49:49 crc kubenswrapper[4756]: I1124 13:49:49.578069 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqv5d/crc-debug-c9876" event={"ID":"41fb1c69-73d6-4681-8ead-585525a1c180","Type":"ContainerStarted","Data":"239d3fb3f596f2646e3e32569fca94c48129eafd7ea849a0a8a5f83dd137dcd6"} Nov 24 13:49:49 crc kubenswrapper[4756]: I1124 13:49:49.592498 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vqv5d/crc-debug-c9876" podStartSLOduration=1.5924787440000001 podStartE2EDuration="1.592478744s" podCreationTimestamp="2025-11-24 13:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 
13:49:49.588759055 +0000 UTC m=+4921.946273207" watchObservedRunningTime="2025-11-24 13:49:49.592478744 +0000 UTC m=+4921.949992896" Nov 24 13:49:50 crc kubenswrapper[4756]: I1124 13:49:50.587329 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqv5d/crc-debug-c9876" event={"ID":"41fb1c69-73d6-4681-8ead-585525a1c180","Type":"ContainerDied","Data":"f39aac2506a5ef249e5bf931be5f996b513cf5f7705e3de7c32f7081d8d6fcea"} Nov 24 13:49:50 crc kubenswrapper[4756]: I1124 13:49:50.587388 4756 generic.go:334] "Generic (PLEG): container finished" podID="41fb1c69-73d6-4681-8ead-585525a1c180" containerID="f39aac2506a5ef249e5bf931be5f996b513cf5f7705e3de7c32f7081d8d6fcea" exitCode=0 Nov 24 13:49:51 crc kubenswrapper[4756]: I1124 13:49:51.692374 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vqv5d/crc-debug-c9876" Nov 24 13:49:51 crc kubenswrapper[4756]: I1124 13:49:51.843958 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcc2x\" (UniqueName: \"kubernetes.io/projected/41fb1c69-73d6-4681-8ead-585525a1c180-kube-api-access-mcc2x\") pod \"41fb1c69-73d6-4681-8ead-585525a1c180\" (UID: \"41fb1c69-73d6-4681-8ead-585525a1c180\") " Nov 24 13:49:51 crc kubenswrapper[4756]: I1124 13:49:51.844329 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41fb1c69-73d6-4681-8ead-585525a1c180-host\") pod \"41fb1c69-73d6-4681-8ead-585525a1c180\" (UID: \"41fb1c69-73d6-4681-8ead-585525a1c180\") " Nov 24 13:49:51 crc kubenswrapper[4756]: I1124 13:49:51.844576 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41fb1c69-73d6-4681-8ead-585525a1c180-host" (OuterVolumeSpecName: "host") pod "41fb1c69-73d6-4681-8ead-585525a1c180" (UID: "41fb1c69-73d6-4681-8ead-585525a1c180"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 13:49:51 crc kubenswrapper[4756]: I1124 13:49:51.845489 4756 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41fb1c69-73d6-4681-8ead-585525a1c180-host\") on node \"crc\" DevicePath \"\"" Nov 24 13:49:51 crc kubenswrapper[4756]: I1124 13:49:51.849930 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41fb1c69-73d6-4681-8ead-585525a1c180-kube-api-access-mcc2x" (OuterVolumeSpecName: "kube-api-access-mcc2x") pod "41fb1c69-73d6-4681-8ead-585525a1c180" (UID: "41fb1c69-73d6-4681-8ead-585525a1c180"). InnerVolumeSpecName "kube-api-access-mcc2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:49:51 crc kubenswrapper[4756]: I1124 13:49:51.951634 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcc2x\" (UniqueName: \"kubernetes.io/projected/41fb1c69-73d6-4681-8ead-585525a1c180-kube-api-access-mcc2x\") on node \"crc\" DevicePath \"\"" Nov 24 13:49:52 crc kubenswrapper[4756]: I1124 13:49:52.242551 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vqv5d/crc-debug-c9876"] Nov 24 13:49:52 crc kubenswrapper[4756]: I1124 13:49:52.251969 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vqv5d/crc-debug-c9876"] Nov 24 13:49:52 crc kubenswrapper[4756]: I1124 13:49:52.488092 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41fb1c69-73d6-4681-8ead-585525a1c180" path="/var/lib/kubelet/pods/41fb1c69-73d6-4681-8ead-585525a1c180/volumes" Nov 24 13:49:52 crc kubenswrapper[4756]: I1124 13:49:52.609378 4756 scope.go:117] "RemoveContainer" containerID="f39aac2506a5ef249e5bf931be5f996b513cf5f7705e3de7c32f7081d8d6fcea" Nov 24 13:49:52 crc kubenswrapper[4756]: I1124 13:49:52.609434 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vqv5d/crc-debug-c9876" Nov 24 13:49:54 crc kubenswrapper[4756]: I1124 13:49:54.050805 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vqv5d/crc-debug-n4j6h"] Nov 24 13:49:54 crc kubenswrapper[4756]: E1124 13:49:54.051762 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41fb1c69-73d6-4681-8ead-585525a1c180" containerName="container-00" Nov 24 13:49:54 crc kubenswrapper[4756]: I1124 13:49:54.051779 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="41fb1c69-73d6-4681-8ead-585525a1c180" containerName="container-00" Nov 24 13:49:54 crc kubenswrapper[4756]: I1124 13:49:54.052035 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="41fb1c69-73d6-4681-8ead-585525a1c180" containerName="container-00" Nov 24 13:49:54 crc kubenswrapper[4756]: I1124 13:49:54.052918 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vqv5d/crc-debug-n4j6h" Nov 24 13:49:54 crc kubenswrapper[4756]: I1124 13:49:54.095470 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/848b2306-b569-4c63-a75c-6ea9abfad618-host\") pod \"crc-debug-n4j6h\" (UID: \"848b2306-b569-4c63-a75c-6ea9abfad618\") " pod="openshift-must-gather-vqv5d/crc-debug-n4j6h" Nov 24 13:49:54 crc kubenswrapper[4756]: I1124 13:49:54.095861 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmqnk\" (UniqueName: \"kubernetes.io/projected/848b2306-b569-4c63-a75c-6ea9abfad618-kube-api-access-mmqnk\") pod \"crc-debug-n4j6h\" (UID: \"848b2306-b569-4c63-a75c-6ea9abfad618\") " pod="openshift-must-gather-vqv5d/crc-debug-n4j6h" Nov 24 13:49:54 crc kubenswrapper[4756]: I1124 13:49:54.197728 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/848b2306-b569-4c63-a75c-6ea9abfad618-host\") pod \"crc-debug-n4j6h\" (UID: \"848b2306-b569-4c63-a75c-6ea9abfad618\") " pod="openshift-must-gather-vqv5d/crc-debug-n4j6h" Nov 24 13:49:54 crc kubenswrapper[4756]: I1124 13:49:54.197893 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/848b2306-b569-4c63-a75c-6ea9abfad618-host\") pod \"crc-debug-n4j6h\" (UID: \"848b2306-b569-4c63-a75c-6ea9abfad618\") " pod="openshift-must-gather-vqv5d/crc-debug-n4j6h" Nov 24 13:49:54 crc kubenswrapper[4756]: I1124 13:49:54.197918 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmqnk\" (UniqueName: \"kubernetes.io/projected/848b2306-b569-4c63-a75c-6ea9abfad618-kube-api-access-mmqnk\") pod \"crc-debug-n4j6h\" (UID: \"848b2306-b569-4c63-a75c-6ea9abfad618\") " pod="openshift-must-gather-vqv5d/crc-debug-n4j6h" Nov 24 13:49:54 crc kubenswrapper[4756]: I1124 13:49:54.403018 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmqnk\" (UniqueName: \"kubernetes.io/projected/848b2306-b569-4c63-a75c-6ea9abfad618-kube-api-access-mmqnk\") pod \"crc-debug-n4j6h\" (UID: \"848b2306-b569-4c63-a75c-6ea9abfad618\") " pod="openshift-must-gather-vqv5d/crc-debug-n4j6h" Nov 24 13:49:54 crc kubenswrapper[4756]: I1124 13:49:54.675259 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vqv5d/crc-debug-n4j6h" Nov 24 13:49:55 crc kubenswrapper[4756]: I1124 13:49:55.641764 4756 generic.go:334] "Generic (PLEG): container finished" podID="848b2306-b569-4c63-a75c-6ea9abfad618" containerID="9b2ebb50c014aa63d26fffa218a278a88d4f1bf1369ddc6ea1c5b487622b208c" exitCode=0 Nov 24 13:49:55 crc kubenswrapper[4756]: I1124 13:49:55.641858 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqv5d/crc-debug-n4j6h" event={"ID":"848b2306-b569-4c63-a75c-6ea9abfad618","Type":"ContainerDied","Data":"9b2ebb50c014aa63d26fffa218a278a88d4f1bf1369ddc6ea1c5b487622b208c"} Nov 24 13:49:55 crc kubenswrapper[4756]: I1124 13:49:55.642286 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqv5d/crc-debug-n4j6h" event={"ID":"848b2306-b569-4c63-a75c-6ea9abfad618","Type":"ContainerStarted","Data":"3538d6aa5927e76d8a1664bdb8f8fedb25ac449ae0b2dac8d70697a48e55ed92"} Nov 24 13:49:55 crc kubenswrapper[4756]: I1124 13:49:55.686213 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vqv5d/crc-debug-n4j6h"] Nov 24 13:49:55 crc kubenswrapper[4756]: I1124 13:49:55.696989 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vqv5d/crc-debug-n4j6h"] Nov 24 13:49:56 crc kubenswrapper[4756]: I1124 13:49:56.769387 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vqv5d/crc-debug-n4j6h" Nov 24 13:49:56 crc kubenswrapper[4756]: I1124 13:49:56.945151 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmqnk\" (UniqueName: \"kubernetes.io/projected/848b2306-b569-4c63-a75c-6ea9abfad618-kube-api-access-mmqnk\") pod \"848b2306-b569-4c63-a75c-6ea9abfad618\" (UID: \"848b2306-b569-4c63-a75c-6ea9abfad618\") " Nov 24 13:49:56 crc kubenswrapper[4756]: I1124 13:49:56.945265 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/848b2306-b569-4c63-a75c-6ea9abfad618-host\") pod \"848b2306-b569-4c63-a75c-6ea9abfad618\" (UID: \"848b2306-b569-4c63-a75c-6ea9abfad618\") " Nov 24 13:49:56 crc kubenswrapper[4756]: I1124 13:49:56.945620 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/848b2306-b569-4c63-a75c-6ea9abfad618-host" (OuterVolumeSpecName: "host") pod "848b2306-b569-4c63-a75c-6ea9abfad618" (UID: "848b2306-b569-4c63-a75c-6ea9abfad618"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 13:49:56 crc kubenswrapper[4756]: I1124 13:49:56.951521 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848b2306-b569-4c63-a75c-6ea9abfad618-kube-api-access-mmqnk" (OuterVolumeSpecName: "kube-api-access-mmqnk") pod "848b2306-b569-4c63-a75c-6ea9abfad618" (UID: "848b2306-b569-4c63-a75c-6ea9abfad618"). InnerVolumeSpecName "kube-api-access-mmqnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:49:57 crc kubenswrapper[4756]: I1124 13:49:57.047049 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmqnk\" (UniqueName: \"kubernetes.io/projected/848b2306-b569-4c63-a75c-6ea9abfad618-kube-api-access-mmqnk\") on node \"crc\" DevicePath \"\"" Nov 24 13:49:57 crc kubenswrapper[4756]: I1124 13:49:57.047088 4756 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/848b2306-b569-4c63-a75c-6ea9abfad618-host\") on node \"crc\" DevicePath \"\"" Nov 24 13:49:57 crc kubenswrapper[4756]: I1124 13:49:57.661556 4756 scope.go:117] "RemoveContainer" containerID="9b2ebb50c014aa63d26fffa218a278a88d4f1bf1369ddc6ea1c5b487622b208c" Nov 24 13:49:57 crc kubenswrapper[4756]: I1124 13:49:57.661606 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vqv5d/crc-debug-n4j6h" Nov 24 13:49:58 crc kubenswrapper[4756]: I1124 13:49:58.488475 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="848b2306-b569-4c63-a75c-6ea9abfad618" path="/var/lib/kubelet/pods/848b2306-b569-4c63-a75c-6ea9abfad618/volumes" Nov 24 13:50:16 crc kubenswrapper[4756]: I1124 13:50:16.738572 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8vm5q"] Nov 24 13:50:16 crc kubenswrapper[4756]: E1124 13:50:16.739489 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848b2306-b569-4c63-a75c-6ea9abfad618" containerName="container-00" Nov 24 13:50:16 crc kubenswrapper[4756]: I1124 13:50:16.739501 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="848b2306-b569-4c63-a75c-6ea9abfad618" containerName="container-00" Nov 24 13:50:16 crc kubenswrapper[4756]: I1124 13:50:16.739698 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="848b2306-b569-4c63-a75c-6ea9abfad618" containerName="container-00" Nov 24 13:50:16 crc 
kubenswrapper[4756]: I1124 13:50:16.741563 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:16 crc kubenswrapper[4756]: I1124 13:50:16.754191 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vm5q"] Nov 24 13:50:16 crc kubenswrapper[4756]: I1124 13:50:16.821673 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx24g\" (UniqueName: \"kubernetes.io/projected/f8ee137e-159d-442a-86e8-87bb2b6181b9-kube-api-access-gx24g\") pod \"certified-operators-8vm5q\" (UID: \"f8ee137e-159d-442a-86e8-87bb2b6181b9\") " pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:16 crc kubenswrapper[4756]: I1124 13:50:16.821787 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ee137e-159d-442a-86e8-87bb2b6181b9-catalog-content\") pod \"certified-operators-8vm5q\" (UID: \"f8ee137e-159d-442a-86e8-87bb2b6181b9\") " pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:16 crc kubenswrapper[4756]: I1124 13:50:16.821868 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ee137e-159d-442a-86e8-87bb2b6181b9-utilities\") pod \"certified-operators-8vm5q\" (UID: \"f8ee137e-159d-442a-86e8-87bb2b6181b9\") " pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:16 crc kubenswrapper[4756]: I1124 13:50:16.924220 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ee137e-159d-442a-86e8-87bb2b6181b9-utilities\") pod \"certified-operators-8vm5q\" (UID: \"f8ee137e-159d-442a-86e8-87bb2b6181b9\") " pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:16 crc 
kubenswrapper[4756]: I1124 13:50:16.924626 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx24g\" (UniqueName: \"kubernetes.io/projected/f8ee137e-159d-442a-86e8-87bb2b6181b9-kube-api-access-gx24g\") pod \"certified-operators-8vm5q\" (UID: \"f8ee137e-159d-442a-86e8-87bb2b6181b9\") " pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:16 crc kubenswrapper[4756]: I1124 13:50:16.924772 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ee137e-159d-442a-86e8-87bb2b6181b9-utilities\") pod \"certified-operators-8vm5q\" (UID: \"f8ee137e-159d-442a-86e8-87bb2b6181b9\") " pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:16 crc kubenswrapper[4756]: I1124 13:50:16.924866 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ee137e-159d-442a-86e8-87bb2b6181b9-catalog-content\") pod \"certified-operators-8vm5q\" (UID: \"f8ee137e-159d-442a-86e8-87bb2b6181b9\") " pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:16 crc kubenswrapper[4756]: I1124 13:50:16.925091 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ee137e-159d-442a-86e8-87bb2b6181b9-catalog-content\") pod \"certified-operators-8vm5q\" (UID: \"f8ee137e-159d-442a-86e8-87bb2b6181b9\") " pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:16 crc kubenswrapper[4756]: I1124 13:50:16.943645 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx24g\" (UniqueName: \"kubernetes.io/projected/f8ee137e-159d-442a-86e8-87bb2b6181b9-kube-api-access-gx24g\") pod \"certified-operators-8vm5q\" (UID: \"f8ee137e-159d-442a-86e8-87bb2b6181b9\") " pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:17 crc kubenswrapper[4756]: I1124 
13:50:17.124026 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:17 crc kubenswrapper[4756]: I1124 13:50:17.640750 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vm5q"] Nov 24 13:50:17 crc kubenswrapper[4756]: I1124 13:50:17.876583 4756 generic.go:334] "Generic (PLEG): container finished" podID="f8ee137e-159d-442a-86e8-87bb2b6181b9" containerID="7cdc3d04a6dece53631ebcb6b27696efcef58991793d0f71b188984be1125041" exitCode=0 Nov 24 13:50:17 crc kubenswrapper[4756]: I1124 13:50:17.876704 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vm5q" event={"ID":"f8ee137e-159d-442a-86e8-87bb2b6181b9","Type":"ContainerDied","Data":"7cdc3d04a6dece53631ebcb6b27696efcef58991793d0f71b188984be1125041"} Nov 24 13:50:17 crc kubenswrapper[4756]: I1124 13:50:17.876923 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vm5q" event={"ID":"f8ee137e-159d-442a-86e8-87bb2b6181b9","Type":"ContainerStarted","Data":"bee85e35c2cc16e8bbebd781cd1a01d17bdb477dab264d9ef9d5b8cd84061380"} Nov 24 13:50:18 crc kubenswrapper[4756]: I1124 13:50:18.891017 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vm5q" event={"ID":"f8ee137e-159d-442a-86e8-87bb2b6181b9","Type":"ContainerStarted","Data":"d97295bd6422f4a189dbd9b7b42aa3b3f1fd2ae208377116af785f157c59de79"} Nov 24 13:50:19 crc kubenswrapper[4756]: I1124 13:50:19.901281 4756 generic.go:334] "Generic (PLEG): container finished" podID="f8ee137e-159d-442a-86e8-87bb2b6181b9" containerID="d97295bd6422f4a189dbd9b7b42aa3b3f1fd2ae208377116af785f157c59de79" exitCode=0 Nov 24 13:50:19 crc kubenswrapper[4756]: I1124 13:50:19.901504 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vm5q" 
event={"ID":"f8ee137e-159d-442a-86e8-87bb2b6181b9","Type":"ContainerDied","Data":"d97295bd6422f4a189dbd9b7b42aa3b3f1fd2ae208377116af785f157c59de79"} Nov 24 13:50:20 crc kubenswrapper[4756]: I1124 13:50:20.916699 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vm5q" event={"ID":"f8ee137e-159d-442a-86e8-87bb2b6181b9","Type":"ContainerStarted","Data":"e4547e68cbe048da4ea2ebdd3dad0215f44974730143c6cfb50f9431b2dfd869"} Nov 24 13:50:20 crc kubenswrapper[4756]: I1124 13:50:20.939121 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8vm5q" podStartSLOduration=2.497186834 podStartE2EDuration="4.939063661s" podCreationTimestamp="2025-11-24 13:50:16 +0000 UTC" firstStartedPulling="2025-11-24 13:50:17.879044261 +0000 UTC m=+4950.236558403" lastFinishedPulling="2025-11-24 13:50:20.320921068 +0000 UTC m=+4952.678435230" observedRunningTime="2025-11-24 13:50:20.936085311 +0000 UTC m=+4953.293599493" watchObservedRunningTime="2025-11-24 13:50:20.939063661 +0000 UTC m=+4953.296577803" Nov 24 13:50:22 crc kubenswrapper[4756]: I1124 13:50:22.451538 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5ddf8b8dd6-scmvb_767d77ce-06bb-44dc-b47f-229303527133/barbican-api/0.log" Nov 24 13:50:22 crc kubenswrapper[4756]: I1124 13:50:22.578568 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5ddf8b8dd6-scmvb_767d77ce-06bb-44dc-b47f-229303527133/barbican-api-log/0.log" Nov 24 13:50:22 crc kubenswrapper[4756]: I1124 13:50:22.629632 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77956fdfb6-wlggx_fb164396-9603-40ac-a47b-5b8feb1be35c/barbican-keystone-listener/0.log" Nov 24 13:50:22 crc kubenswrapper[4756]: I1124 13:50:22.747969 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-77956fdfb6-wlggx_fb164396-9603-40ac-a47b-5b8feb1be35c/barbican-keystone-listener-log/0.log" Nov 24 13:50:22 crc kubenswrapper[4756]: I1124 13:50:22.864899 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-568b98fff9-ngjr7_2ab695f1-c645-42dc-be38-2935fbe4977d/barbican-worker/0.log" Nov 24 13:50:22 crc kubenswrapper[4756]: I1124 13:50:22.912688 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-568b98fff9-ngjr7_2ab695f1-c645-42dc-be38-2935fbe4977d/barbican-worker-log/0.log" Nov 24 13:50:23 crc kubenswrapper[4756]: I1124 13:50:23.119512 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc_4e481796-37f1-413f-8274-2d32d2f3ef5c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:23 crc kubenswrapper[4756]: I1124 13:50:23.182225 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f1e15270-1d58-42bb-ad0a-635726bae163/ceilometer-central-agent/0.log" Nov 24 13:50:23 crc kubenswrapper[4756]: I1124 13:50:23.220000 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f1e15270-1d58-42bb-ad0a-635726bae163/ceilometer-notification-agent/0.log" Nov 24 13:50:23 crc kubenswrapper[4756]: I1124 13:50:23.337260 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f1e15270-1d58-42bb-ad0a-635726bae163/proxy-httpd/0.log" Nov 24 13:50:23 crc kubenswrapper[4756]: I1124 13:50:23.387875 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f1e15270-1d58-42bb-ad0a-635726bae163/sg-core/0.log" Nov 24 13:50:23 crc kubenswrapper[4756]: I1124 13:50:23.483881 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348/cinder-api/0.log" Nov 24 13:50:23 crc kubenswrapper[4756]: I1124 
13:50:23.532086 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348/cinder-api-log/0.log" Nov 24 13:50:23 crc kubenswrapper[4756]: I1124 13:50:23.730641 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_202e89fe-1aa2-462b-b3cf-2d71151c8de9/cinder-scheduler/0.log" Nov 24 13:50:23 crc kubenswrapper[4756]: I1124 13:50:23.740946 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_202e89fe-1aa2-462b-b3cf-2d71151c8de9/probe/0.log" Nov 24 13:50:23 crc kubenswrapper[4756]: I1124 13:50:23.863984 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg_529df660-5b77-4ba7-b190-02acb8a8de9c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:24 crc kubenswrapper[4756]: I1124 13:50:24.032314 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj_2750f3ce-2cc3-41e8-a2b5-7a96e17c9842/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:24 crc kubenswrapper[4756]: I1124 13:50:24.241316 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-dvbcg_a4fc331b-d9d7-4748-b1ef-2fae03d9b525/init/0.log" Nov 24 13:50:24 crc kubenswrapper[4756]: I1124 13:50:24.436038 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-dvbcg_a4fc331b-d9d7-4748-b1ef-2fae03d9b525/init/0.log" Nov 24 13:50:24 crc kubenswrapper[4756]: I1124 13:50:24.511766 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh_26be1a13-f657-4240-ba64-a260d9a6355a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:24 crc kubenswrapper[4756]: I1124 13:50:24.583283 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-dvbcg_a4fc331b-d9d7-4748-b1ef-2fae03d9b525/dnsmasq-dns/0.log" Nov 24 13:50:24 crc kubenswrapper[4756]: I1124 13:50:24.767592 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_505721db-c67e-42b6-b508-11cd950bc272/glance-httpd/0.log" Nov 24 13:50:24 crc kubenswrapper[4756]: I1124 13:50:24.802952 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_505721db-c67e-42b6-b508-11cd950bc272/glance-log/0.log" Nov 24 13:50:24 crc kubenswrapper[4756]: I1124 13:50:24.939436 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3add353c-985b-4ed2-9bcf-a64e03c5479a/glance-httpd/0.log" Nov 24 13:50:24 crc kubenswrapper[4756]: I1124 13:50:24.953105 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3add353c-985b-4ed2-9bcf-a64e03c5479a/glance-log/0.log" Nov 24 13:50:25 crc kubenswrapper[4756]: I1124 13:50:25.127710 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-585c6478b8-gsbzg_6ae02ece-f457-4943-92fe-9569b5083f41/horizon/0.log" Nov 24 13:50:25 crc kubenswrapper[4756]: I1124 13:50:25.357855 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd_07b45682-fb34-44c2-8fa1-fcf25559773e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:25 crc kubenswrapper[4756]: I1124 13:50:25.521537 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-z5l4l_6d74c7df-a689-4879-9319-a808a2f726cb/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:25 crc kubenswrapper[4756]: I1124 13:50:25.673410 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-585c6478b8-gsbzg_6ae02ece-f457-4943-92fe-9569b5083f41/horizon-log/0.log" Nov 24 13:50:25 crc kubenswrapper[4756]: I1124 13:50:25.916110 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29399821-rlf75_c6cdb570-24c3-419d-b75b-7bd66ec283a3/keystone-cron/0.log" Nov 24 13:50:26 crc kubenswrapper[4756]: I1124 13:50:26.004468 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5995df89cc-sxcgq_77785a15-6850-4685-8fbd-b129153baa32/keystone-api/0.log" Nov 24 13:50:26 crc kubenswrapper[4756]: I1124 13:50:26.041458 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d6fdbd64-1ed5-48d3-a245-a13416afe4d9/kube-state-metrics/0.log" Nov 24 13:50:26 crc kubenswrapper[4756]: I1124 13:50:26.235364 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-cnghh_9956cbef-8286-4c85-9c91-4e476d82d3d9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:26 crc kubenswrapper[4756]: I1124 13:50:26.732405 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b4ddfdbf7-zd9s5_d2f5b7c5-30dd-4145-a18a-fe929e4d660a/neutron-httpd/0.log" Nov 24 13:50:26 crc kubenswrapper[4756]: I1124 13:50:26.782323 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6_7cf15818-9c96-4bbe-bb89-6d26aff5bfbe/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:26 crc kubenswrapper[4756]: I1124 13:50:26.806645 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b4ddfdbf7-zd9s5_d2f5b7c5-30dd-4145-a18a-fe929e4d660a/neutron-api/0.log" Nov 24 13:50:27 crc kubenswrapper[4756]: I1124 13:50:27.125049 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:27 crc 
kubenswrapper[4756]: I1124 13:50:27.125378 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:27 crc kubenswrapper[4756]: I1124 13:50:27.184756 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:27 crc kubenswrapper[4756]: I1124 13:50:27.427572 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3/nova-cell0-conductor-conductor/0.log" Nov 24 13:50:27 crc kubenswrapper[4756]: I1124 13:50:27.960916 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_94da8bb7-f5c5-4411-be93-40a15bb4c121/nova-cell1-conductor-conductor/0.log" Nov 24 13:50:28 crc kubenswrapper[4756]: I1124 13:50:28.042151 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:28 crc kubenswrapper[4756]: I1124 13:50:28.099856 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vm5q"] Nov 24 13:50:28 crc kubenswrapper[4756]: I1124 13:50:28.218147 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_54bdc4b7-e42a-49b9-b81e-d817f3c08555/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 13:50:28 crc kubenswrapper[4756]: I1124 13:50:28.314271 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c8190147-b7ca-47e1-86f0-54dad2dbc996/nova-api-log/0.log" Nov 24 13:50:28 crc kubenswrapper[4756]: I1124 13:50:28.513542 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-mljzt_0cc3b9cc-6392-479d-bb83-af7c5fe6d79d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:28 crc kubenswrapper[4756]: I1124 13:50:28.545889 4756 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-api-0_c8190147-b7ca-47e1-86f0-54dad2dbc996/nova-api-api/0.log" Nov 24 13:50:28 crc kubenswrapper[4756]: I1124 13:50:28.618488 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fbc35708-5fe2-4f73-b7f0-958f40e12f63/nova-metadata-log/0.log" Nov 24 13:50:29 crc kubenswrapper[4756]: I1124 13:50:29.017724 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_80020f7a-2503-4446-84ea-148cb2bac0be/mysql-bootstrap/0.log" Nov 24 13:50:29 crc kubenswrapper[4756]: I1124 13:50:29.136907 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_18d356c0-8e84-4ec3-b61c-bef4f3906505/nova-scheduler-scheduler/0.log" Nov 24 13:50:29 crc kubenswrapper[4756]: I1124 13:50:29.229142 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_80020f7a-2503-4446-84ea-148cb2bac0be/mysql-bootstrap/0.log" Nov 24 13:50:29 crc kubenswrapper[4756]: I1124 13:50:29.296629 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_80020f7a-2503-4446-84ea-148cb2bac0be/galera/0.log" Nov 24 13:50:29 crc kubenswrapper[4756]: I1124 13:50:29.481077 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45/mysql-bootstrap/0.log" Nov 24 13:50:29 crc kubenswrapper[4756]: I1124 13:50:29.668030 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45/mysql-bootstrap/0.log" Nov 24 13:50:29 crc kubenswrapper[4756]: I1124 13:50:29.745477 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45/galera/0.log" Nov 24 13:50:29 crc kubenswrapper[4756]: I1124 13:50:29.876945 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_08b95fcc-45c6-4618-bdad-3fb8c095e753/openstackclient/0.log" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.000291 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8vm5q" podUID="f8ee137e-159d-442a-86e8-87bb2b6181b9" containerName="registry-server" containerID="cri-o://e4547e68cbe048da4ea2ebdd3dad0215f44974730143c6cfb50f9431b2dfd869" gracePeriod=2 Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.018837 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2lk9k_f9af141a-c02a-4457-b68e-111765a62280/ovn-controller/0.log" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.211431 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jldfh_fb0e2af7-5d32-48ae-9f03-91233a28ed8e/openstack-network-exporter/0.log" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.381625 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v5r4t_2d0b4104-d3c6-4219-b239-a52830b8429b/ovsdb-server-init/0.log" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.504558 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.597760 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx24g\" (UniqueName: \"kubernetes.io/projected/f8ee137e-159d-442a-86e8-87bb2b6181b9-kube-api-access-gx24g\") pod \"f8ee137e-159d-442a-86e8-87bb2b6181b9\" (UID: \"f8ee137e-159d-442a-86e8-87bb2b6181b9\") " Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.597884 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ee137e-159d-442a-86e8-87bb2b6181b9-utilities\") pod \"f8ee137e-159d-442a-86e8-87bb2b6181b9\" (UID: \"f8ee137e-159d-442a-86e8-87bb2b6181b9\") " Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.597962 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ee137e-159d-442a-86e8-87bb2b6181b9-catalog-content\") pod \"f8ee137e-159d-442a-86e8-87bb2b6181b9\" (UID: \"f8ee137e-159d-442a-86e8-87bb2b6181b9\") " Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.598820 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8ee137e-159d-442a-86e8-87bb2b6181b9-utilities" (OuterVolumeSpecName: "utilities") pod "f8ee137e-159d-442a-86e8-87bb2b6181b9" (UID: "f8ee137e-159d-442a-86e8-87bb2b6181b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.607316 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ee137e-159d-442a-86e8-87bb2b6181b9-kube-api-access-gx24g" (OuterVolumeSpecName: "kube-api-access-gx24g") pod "f8ee137e-159d-442a-86e8-87bb2b6181b9" (UID: "f8ee137e-159d-442a-86e8-87bb2b6181b9"). InnerVolumeSpecName "kube-api-access-gx24g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.657604 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8ee137e-159d-442a-86e8-87bb2b6181b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8ee137e-159d-442a-86e8-87bb2b6181b9" (UID: "f8ee137e-159d-442a-86e8-87bb2b6181b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.686839 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v5r4t_2d0b4104-d3c6-4219-b239-a52830b8429b/ovs-vswitchd/0.log" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.700086 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx24g\" (UniqueName: \"kubernetes.io/projected/f8ee137e-159d-442a-86e8-87bb2b6181b9-kube-api-access-gx24g\") on node \"crc\" DevicePath \"\"" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.700118 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ee137e-159d-442a-86e8-87bb2b6181b9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.700127 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ee137e-159d-442a-86e8-87bb2b6181b9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.705158 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fbc35708-5fe2-4f73-b7f0-958f40e12f63/nova-metadata-metadata/0.log" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.721654 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v5r4t_2d0b4104-d3c6-4219-b239-a52830b8429b/ovsdb-server/0.log" Nov 24 13:50:30 crc 
kubenswrapper[4756]: I1124 13:50:30.722394 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v5r4t_2d0b4104-d3c6-4219-b239-a52830b8429b/ovsdb-server-init/0.log" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.968099 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6f03b394-8de8-41e4-9cbe-a09bc8e922ad/openstack-network-exporter/0.log" Nov 24 13:50:30 crc kubenswrapper[4756]: I1124 13:50:30.978926 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-mp7tr_e0a0f4dd-db57-4645-9b06-51c0416636f4/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.010098 4756 generic.go:334] "Generic (PLEG): container finished" podID="f8ee137e-159d-442a-86e8-87bb2b6181b9" containerID="e4547e68cbe048da4ea2ebdd3dad0215f44974730143c6cfb50f9431b2dfd869" exitCode=0 Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.010149 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vm5q" event={"ID":"f8ee137e-159d-442a-86e8-87bb2b6181b9","Type":"ContainerDied","Data":"e4547e68cbe048da4ea2ebdd3dad0215f44974730143c6cfb50f9431b2dfd869"} Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.010203 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vm5q" event={"ID":"f8ee137e-159d-442a-86e8-87bb2b6181b9","Type":"ContainerDied","Data":"bee85e35c2cc16e8bbebd781cd1a01d17bdb477dab264d9ef9d5b8cd84061380"} Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.010224 4756 scope.go:117] "RemoveContainer" containerID="e4547e68cbe048da4ea2ebdd3dad0215f44974730143c6cfb50f9431b2dfd869" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.010577 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8vm5q" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.028780 4756 scope.go:117] "RemoveContainer" containerID="d97295bd6422f4a189dbd9b7b42aa3b3f1fd2ae208377116af785f157c59de79" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.078631 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vm5q"] Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.087501 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8vm5q"] Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.096969 4756 scope.go:117] "RemoveContainer" containerID="7cdc3d04a6dece53631ebcb6b27696efcef58991793d0f71b188984be1125041" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.128146 4756 scope.go:117] "RemoveContainer" containerID="e4547e68cbe048da4ea2ebdd3dad0215f44974730143c6cfb50f9431b2dfd869" Nov 24 13:50:31 crc kubenswrapper[4756]: E1124 13:50:31.130334 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4547e68cbe048da4ea2ebdd3dad0215f44974730143c6cfb50f9431b2dfd869\": container with ID starting with e4547e68cbe048da4ea2ebdd3dad0215f44974730143c6cfb50f9431b2dfd869 not found: ID does not exist" containerID="e4547e68cbe048da4ea2ebdd3dad0215f44974730143c6cfb50f9431b2dfd869" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.130375 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4547e68cbe048da4ea2ebdd3dad0215f44974730143c6cfb50f9431b2dfd869"} err="failed to get container status \"e4547e68cbe048da4ea2ebdd3dad0215f44974730143c6cfb50f9431b2dfd869\": rpc error: code = NotFound desc = could not find container \"e4547e68cbe048da4ea2ebdd3dad0215f44974730143c6cfb50f9431b2dfd869\": container with ID starting with e4547e68cbe048da4ea2ebdd3dad0215f44974730143c6cfb50f9431b2dfd869 not 
found: ID does not exist" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.130399 4756 scope.go:117] "RemoveContainer" containerID="d97295bd6422f4a189dbd9b7b42aa3b3f1fd2ae208377116af785f157c59de79" Nov 24 13:50:31 crc kubenswrapper[4756]: E1124 13:50:31.130723 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d97295bd6422f4a189dbd9b7b42aa3b3f1fd2ae208377116af785f157c59de79\": container with ID starting with d97295bd6422f4a189dbd9b7b42aa3b3f1fd2ae208377116af785f157c59de79 not found: ID does not exist" containerID="d97295bd6422f4a189dbd9b7b42aa3b3f1fd2ae208377116af785f157c59de79" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.130742 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d97295bd6422f4a189dbd9b7b42aa3b3f1fd2ae208377116af785f157c59de79"} err="failed to get container status \"d97295bd6422f4a189dbd9b7b42aa3b3f1fd2ae208377116af785f157c59de79\": rpc error: code = NotFound desc = could not find container \"d97295bd6422f4a189dbd9b7b42aa3b3f1fd2ae208377116af785f157c59de79\": container with ID starting with d97295bd6422f4a189dbd9b7b42aa3b3f1fd2ae208377116af785f157c59de79 not found: ID does not exist" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.130754 4756 scope.go:117] "RemoveContainer" containerID="7cdc3d04a6dece53631ebcb6b27696efcef58991793d0f71b188984be1125041" Nov 24 13:50:31 crc kubenswrapper[4756]: E1124 13:50:31.131087 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cdc3d04a6dece53631ebcb6b27696efcef58991793d0f71b188984be1125041\": container with ID starting with 7cdc3d04a6dece53631ebcb6b27696efcef58991793d0f71b188984be1125041 not found: ID does not exist" containerID="7cdc3d04a6dece53631ebcb6b27696efcef58991793d0f71b188984be1125041" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.131252 4756 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cdc3d04a6dece53631ebcb6b27696efcef58991793d0f71b188984be1125041"} err="failed to get container status \"7cdc3d04a6dece53631ebcb6b27696efcef58991793d0f71b188984be1125041\": rpc error: code = NotFound desc = could not find container \"7cdc3d04a6dece53631ebcb6b27696efcef58991793d0f71b188984be1125041\": container with ID starting with 7cdc3d04a6dece53631ebcb6b27696efcef58991793d0f71b188984be1125041 not found: ID does not exist" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.176586 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6f03b394-8de8-41e4-9cbe-a09bc8e922ad/ovn-northd/0.log" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.189229 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f016c6c2-d6cf-42ff-a700-314a97bb1bcc/openstack-network-exporter/0.log" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.306864 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f016c6c2-d6cf-42ff-a700-314a97bb1bcc/ovsdbserver-nb/0.log" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.452509 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_57506583-001b-4baf-b8b1-6cd4fc282472/openstack-network-exporter/0.log" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.492152 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_57506583-001b-4baf-b8b1-6cd4fc282472/ovsdbserver-sb/0.log" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.806721 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f8c9745c6-b4wdj_b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7/placement-api/0.log" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.820904 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_53e70a60-dcdd-4bec-b24f-b37ed751bb90/init-config-reloader/0.log" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.940734 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f8c9745c6-b4wdj_b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7/placement-log/0.log" Nov 24 13:50:31 crc kubenswrapper[4756]: I1124 13:50:31.982819 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_53e70a60-dcdd-4bec-b24f-b37ed751bb90/init-config-reloader/0.log" Nov 24 13:50:32 crc kubenswrapper[4756]: I1124 13:50:32.010600 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_53e70a60-dcdd-4bec-b24f-b37ed751bb90/config-reloader/0.log" Nov 24 13:50:32 crc kubenswrapper[4756]: I1124 13:50:32.059764 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_53e70a60-dcdd-4bec-b24f-b37ed751bb90/prometheus/0.log" Nov 24 13:50:32 crc kubenswrapper[4756]: I1124 13:50:32.192721 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_53e70a60-dcdd-4bec-b24f-b37ed751bb90/thanos-sidecar/0.log" Nov 24 13:50:32 crc kubenswrapper[4756]: I1124 13:50:32.252132 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_80c56614-94b5-4a4b-843b-0941f1899ad8/setup-container/0.log" Nov 24 13:50:32 crc kubenswrapper[4756]: I1124 13:50:32.485692 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8ee137e-159d-442a-86e8-87bb2b6181b9" path="/var/lib/kubelet/pods/f8ee137e-159d-442a-86e8-87bb2b6181b9/volumes" Nov 24 13:50:32 crc kubenswrapper[4756]: I1124 13:50:32.490944 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_80c56614-94b5-4a4b-843b-0941f1899ad8/setup-container/0.log" Nov 24 13:50:32 crc kubenswrapper[4756]: I1124 13:50:32.513151 
4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_80c56614-94b5-4a4b-843b-0941f1899ad8/rabbitmq/0.log" Nov 24 13:50:32 crc kubenswrapper[4756]: I1124 13:50:32.571268 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_df531574-9350-4c19-bc09-b95744b731d0/setup-container/0.log" Nov 24 13:50:32 crc kubenswrapper[4756]: I1124 13:50:32.784056 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_df531574-9350-4c19-bc09-b95744b731d0/setup-container/0.log" Nov 24 13:50:32 crc kubenswrapper[4756]: I1124 13:50:32.793020 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_df531574-9350-4c19-bc09-b95744b731d0/rabbitmq/0.log" Nov 24 13:50:32 crc kubenswrapper[4756]: I1124 13:50:32.806462 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6_815b1dea-8fed-47a0-bb79-5eb5bb428c34/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:33 crc kubenswrapper[4756]: I1124 13:50:33.026641 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm_1f6dbb8f-7ae0-4132-ac08-12d04b55bb90/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:33 crc kubenswrapper[4756]: I1124 13:50:33.037050 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-42v8v_2b3ef56e-99e5-44c6-8a14-b49385bf3144/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:33 crc kubenswrapper[4756]: I1124 13:50:33.336232 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-c6sz4_8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1/ssh-known-hosts-edpm-deployment/0.log" Nov 24 13:50:33 crc kubenswrapper[4756]: I1124 13:50:33.348942 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-564fv_16cccf4b-aeec-4529-9f5f-547e0df302e1/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:33 crc kubenswrapper[4756]: I1124 13:50:33.576287 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7646996fbc-r65ms_cfa5b0a5-395b-463e-aeb1-21b5cca10b22/proxy-server/0.log" Nov 24 13:50:33 crc kubenswrapper[4756]: I1124 13:50:33.688048 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7646996fbc-r65ms_cfa5b0a5-395b-463e-aeb1-21b5cca10b22/proxy-httpd/0.log" Nov 24 13:50:33 crc kubenswrapper[4756]: I1124 13:50:33.726384 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4x5lq_6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07/swift-ring-rebalance/0.log" Nov 24 13:50:33 crc kubenswrapper[4756]: I1124 13:50:33.846780 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/account-auditor/0.log" Nov 24 13:50:33 crc kubenswrapper[4756]: I1124 13:50:33.879505 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/account-reaper/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.059218 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/account-server/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.094607 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/container-auditor/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.098108 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/account-replicator/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.126563 4756 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/container-replicator/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.259308 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/container-server/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.273450 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/container-updater/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.297286 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/object-auditor/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.341446 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/object-expirer/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.483303 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/object-replicator/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.530019 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/object-server/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.584895 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/rsync/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.603700 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/object-updater/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.700917 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/swift-recon-cron/0.log" Nov 24 13:50:34 crc kubenswrapper[4756]: I1124 13:50:34.791772 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw_24a1eee7-f667-475d-9b05-0b9f49a5619c/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:35 crc kubenswrapper[4756]: I1124 13:50:35.003125 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_11285260-11ac-42da-b521-1be38199040e/test-operator-logs-container/0.log" Nov 24 13:50:35 crc kubenswrapper[4756]: I1124 13:50:35.012444 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_931a5dda-ad1f-4595-a5b8-3b1820afb648/tempest-tests-tempest-tests-runner/0.log" Nov 24 13:50:35 crc kubenswrapper[4756]: I1124 13:50:35.191931 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4_2112cb18-ecf9-43ff-b22c-37044d2b64e2/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:50:35 crc kubenswrapper[4756]: I1124 13:50:35.786218 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d/watcher-applier/0.log" Nov 24 13:50:36 crc kubenswrapper[4756]: I1124 13:50:36.055603 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_c84e5d83-17b7-47c7-9952-9e6942940b2a/watcher-api-log/0.log" Nov 24 13:50:36 crc kubenswrapper[4756]: I1124 13:50:36.070970 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_be09de3c-c143-4b11-98ca-45292b9b015c/memcached/0.log" Nov 24 13:50:37 crc kubenswrapper[4756]: I1124 13:50:37.352014 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-decision-engine-0_6eff196d-2bdb-48c0-9c64-f8f0836f5450/watcher-decision-engine/0.log" Nov 24 13:50:38 crc kubenswrapper[4756]: I1124 13:50:38.421334 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_c84e5d83-17b7-47c7-9952-9e6942940b2a/watcher-api/0.log" Nov 24 13:51:00 crc kubenswrapper[4756]: I1124 13:51:00.823732 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/util/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: I1124 13:51:01.021086 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/pull/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: I1124 13:51:01.024548 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/util/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: I1124 13:51:01.031531 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/pull/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: I1124 13:51:01.207108 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/util/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: I1124 13:51:01.240721 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/extract/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: I1124 13:51:01.247647 4756 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/pull/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: I1124 13:51:01.418738 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-s8jfx_81563dca-2369-4349-9881-b2031df19de0/kube-rbac-proxy/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: I1124 13:51:01.479404 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-5srrx_0aa7a2bc-482f-4ed4-820d-331ea6d971c7/kube-rbac-proxy/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: I1124 13:51:01.499502 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-s8jfx_81563dca-2369-4349-9881-b2031df19de0/manager/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: I1124 13:51:01.656571 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-5srrx_0aa7a2bc-482f-4ed4-820d-331ea6d971c7/manager/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: I1124 13:51:01.699360 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-djhp4_3bc7fab7-280b-4964-a1f0-51f0b59438ed/kube-rbac-proxy/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: I1124 13:51:01.730029 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-djhp4_3bc7fab7-280b-4964-a1f0-51f0b59438ed/manager/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: I1124 13:51:01.861905 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-w22c4_91ae544e-de6e-44e8-9119-eae33586fe56/kube-rbac-proxy/0.log" Nov 24 13:51:01 crc kubenswrapper[4756]: 
I1124 13:51:01.967481 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-w22c4_91ae544e-de6e-44e8-9119-eae33586fe56/manager/0.log" Nov 24 13:51:02 crc kubenswrapper[4756]: I1124 13:51:02.017254 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-p9zgm_e2224700-f8c7-4380-95c5-537e168c7e99/kube-rbac-proxy/0.log" Nov 24 13:51:02 crc kubenswrapper[4756]: I1124 13:51:02.076449 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-p9zgm_e2224700-f8c7-4380-95c5-537e168c7e99/manager/0.log" Nov 24 13:51:02 crc kubenswrapper[4756]: I1124 13:51:02.155052 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-sh48c_99dfd9fb-d4ea-4f2a-bbd5-670d1a75c7fd/kube-rbac-proxy/0.log" Nov 24 13:51:02 crc kubenswrapper[4756]: I1124 13:51:02.213798 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-sh48c_99dfd9fb-d4ea-4f2a-bbd5-670d1a75c7fd/manager/0.log" Nov 24 13:51:02 crc kubenswrapper[4756]: I1124 13:51:02.313586 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-hbs6w_8d4269ad-a2ff-47be-bade-792bbf616cf2/kube-rbac-proxy/0.log" Nov 24 13:51:02 crc kubenswrapper[4756]: I1124 13:51:02.429407 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-kqtc5_4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e/kube-rbac-proxy/0.log" Nov 24 13:51:02 crc kubenswrapper[4756]: I1124 13:51:02.482863 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-hbs6w_8d4269ad-a2ff-47be-bade-792bbf616cf2/manager/0.log" Nov 24 
13:51:02 crc kubenswrapper[4756]: I1124 13:51:02.539011 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-kqtc5_4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e/manager/0.log" Nov 24 13:51:02 crc kubenswrapper[4756]: I1124 13:51:02.649672 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-s2rwz_09cf908e-b30f-47ea-a4d1-2e50a192289f/kube-rbac-proxy/0.log" Nov 24 13:51:02 crc kubenswrapper[4756]: I1124 13:51:02.732013 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-s2rwz_09cf908e-b30f-47ea-a4d1-2e50a192289f/manager/0.log" Nov 24 13:51:02 crc kubenswrapper[4756]: I1124 13:51:02.822183 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-9q9zc_991621b1-366e-4d35-b1b7-6380e506ea08/manager/0.log" Nov 24 13:51:02 crc kubenswrapper[4756]: I1124 13:51:02.827341 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-9q9zc_991621b1-366e-4d35-b1b7-6380e506ea08/kube-rbac-proxy/0.log" Nov 24 13:51:02 crc kubenswrapper[4756]: I1124 13:51:02.956624 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-jgtkf_274dfe9d-6821-481f-a605-bf8fbf101f89/kube-rbac-proxy/0.log" Nov 24 13:51:03 crc kubenswrapper[4756]: I1124 13:51:03.021497 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-jgtkf_274dfe9d-6821-481f-a605-bf8fbf101f89/manager/0.log" Nov 24 13:51:03 crc kubenswrapper[4756]: I1124 13:51:03.157988 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-vwvff_8465956b-6245-447e-adcd-7ba8367ca117/kube-rbac-proxy/0.log" Nov 24 13:51:03 crc kubenswrapper[4756]: I1124 13:51:03.229691 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-vwvff_8465956b-6245-447e-adcd-7ba8367ca117/manager/0.log" Nov 24 13:51:03 crc kubenswrapper[4756]: I1124 13:51:03.232687 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-xjntp_bfc7de9e-743d-4492-979c-7043fb8b41d1/kube-rbac-proxy/0.log" Nov 24 13:51:03 crc kubenswrapper[4756]: I1124 13:51:03.421712 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-m4r6n_f874e9c8-d248-46c4-a1f2-8912827db14f/manager/0.log" Nov 24 13:51:03 crc kubenswrapper[4756]: I1124 13:51:03.423648 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-xjntp_bfc7de9e-743d-4492-979c-7043fb8b41d1/manager/0.log" Nov 24 13:51:03 crc kubenswrapper[4756]: I1124 13:51:03.438337 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-m4r6n_f874e9c8-d248-46c4-a1f2-8912827db14f/kube-rbac-proxy/0.log" Nov 24 13:51:03 crc kubenswrapper[4756]: I1124 13:51:03.611323 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb_94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f/kube-rbac-proxy/0.log" Nov 24 13:51:03 crc kubenswrapper[4756]: I1124 13:51:03.621494 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb_94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f/manager/0.log" Nov 24 13:51:03 crc kubenswrapper[4756]: I1124 
13:51:03.949614 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-666974d685-8lth8_0e481139-e850-4597-b98b-0a2aa8b1add9/operator/0.log" Nov 24 13:51:04 crc kubenswrapper[4756]: I1124 13:51:04.073758 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7jgvj_bc62b86b-8abe-4660-b5dd-e80f36962d0a/registry-server/0.log" Nov 24 13:51:04 crc kubenswrapper[4756]: I1124 13:51:04.159938 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-l9fh4_46c6804b-e74a-42d0-bc4e-2ffa7a5fa491/kube-rbac-proxy/0.log" Nov 24 13:51:04 crc kubenswrapper[4756]: I1124 13:51:04.320765 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-l9fh4_46c6804b-e74a-42d0-bc4e-2ffa7a5fa491/manager/0.log" Nov 24 13:51:04 crc kubenswrapper[4756]: I1124 13:51:04.422484 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-dkjtg_248f663c-2ddc-487f-a33c-9d7b9bad23be/kube-rbac-proxy/0.log" Nov 24 13:51:04 crc kubenswrapper[4756]: I1124 13:51:04.610030 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-kvg52_0624f295-ba46-4e28-9f0f-356cfbe6ecbc/operator/0.log" Nov 24 13:51:04 crc kubenswrapper[4756]: I1124 13:51:04.610966 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-dkjtg_248f663c-2ddc-487f-a33c-9d7b9bad23be/manager/0.log" Nov 24 13:51:04 crc kubenswrapper[4756]: I1124 13:51:04.745339 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rb4l5_4c0ece30-ae1b-4706-861c-2ee51f7332d7/kube-rbac-proxy/0.log" Nov 24 13:51:04 crc 
kubenswrapper[4756]: I1124 13:51:04.899306 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rb4l5_4c0ece30-ae1b-4706-861c-2ee51f7332d7/manager/0.log" Nov 24 13:51:04 crc kubenswrapper[4756]: I1124 13:51:04.925444 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-spm8x_7480249a-d35a-4768-b5cc-daebd6f82c9b/kube-rbac-proxy/0.log" Nov 24 13:51:05 crc kubenswrapper[4756]: I1124 13:51:05.161059 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-5mpgm_2f99acab-4016-43dc-ab21-6d0c920def14/kube-rbac-proxy/0.log" Nov 24 13:51:05 crc kubenswrapper[4756]: I1124 13:51:05.233988 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-5mpgm_2f99acab-4016-43dc-ab21-6d0c920def14/manager/0.log" Nov 24 13:51:05 crc kubenswrapper[4756]: I1124 13:51:05.255860 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-spm8x_7480249a-d35a-4768-b5cc-daebd6f82c9b/manager/0.log" Nov 24 13:51:05 crc kubenswrapper[4756]: I1124 13:51:05.323249 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-57bd844978-vgphd_fc8be713-d12e-4289-adbd-a3aee9ebf603/manager/0.log" Nov 24 13:51:05 crc kubenswrapper[4756]: I1124 13:51:05.411921 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7445f8dd59-46b2l_eb1f334a-14e2-4f63-8168-e5db902d8e70/kube-rbac-proxy/0.log" Nov 24 13:51:05 crc kubenswrapper[4756]: I1124 13:51:05.468038 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7445f8dd59-46b2l_eb1f334a-14e2-4f63-8168-e5db902d8e70/manager/0.log" Nov 24 13:51:23 crc kubenswrapper[4756]: I1124 13:51:23.360345 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-v5ndr_37bf3224-33a8-45ab-93fc-05a44ed3f535/control-plane-machine-set-operator/0.log" Nov 24 13:51:23 crc kubenswrapper[4756]: I1124 13:51:23.562455 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xhcw8_a0672b72-0b66-434e-8930-4297ea0f3f98/kube-rbac-proxy/0.log" Nov 24 13:51:23 crc kubenswrapper[4756]: I1124 13:51:23.581761 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xhcw8_a0672b72-0b66-434e-8930-4297ea0f3f98/machine-api-operator/0.log" Nov 24 13:51:33 crc kubenswrapper[4756]: I1124 13:51:33.479606 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:51:33 crc kubenswrapper[4756]: I1124 13:51:33.480194 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:51:36 crc kubenswrapper[4756]: I1124 13:51:36.729674 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-v72fq_50967785-12d1-45d3-b9e1-03c7dcb00af4/cert-manager-controller/0.log" Nov 24 13:51:36 crc kubenswrapper[4756]: I1124 13:51:36.804894 4756 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-4dcdb_029ff7e6-28c1-4bf7-9b5b-575230d5ed04/cert-manager-cainjector/0.log" Nov 24 13:51:36 crc kubenswrapper[4756]: I1124 13:51:36.853175 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-4xzdf_f2d8ba73-901c-4245-bb1f-37c63a3b7232/cert-manager-webhook/0.log" Nov 24 13:51:50 crc kubenswrapper[4756]: I1124 13:51:50.574893 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-xbnb6_c0c918b6-55ce-4aa8-b777-1b442a5c0ea9/nmstate-console-plugin/0.log" Nov 24 13:51:50 crc kubenswrapper[4756]: I1124 13:51:50.729284 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hg5kh_bb1daace-bab0-41df-a60c-cc01cd7013ea/nmstate-handler/0.log" Nov 24 13:51:50 crc kubenswrapper[4756]: I1124 13:51:50.801342 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-4c8wf_6e7aa8ba-7a40-4d85-8b19-86dfe7e87eb1/nmstate-metrics/0.log" Nov 24 13:51:50 crc kubenswrapper[4756]: I1124 13:51:50.803833 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-4c8wf_6e7aa8ba-7a40-4d85-8b19-86dfe7e87eb1/kube-rbac-proxy/0.log" Nov 24 13:51:50 crc kubenswrapper[4756]: I1124 13:51:50.979045 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-9gdrp_7d382e59-3e6d-496e-b637-3ef4848ddc24/nmstate-operator/0.log" Nov 24 13:51:50 crc kubenswrapper[4756]: I1124 13:51:50.990646 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-p6r5k_28634c25-efc4-43b6-92c5-0bc6b20aa941/nmstate-webhook/0.log" Nov 24 13:52:03 crc kubenswrapper[4756]: I1124 13:52:03.479249 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:52:03 crc kubenswrapper[4756]: I1124 13:52:03.481304 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.145617 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5k8pq_e090cbac-2c8e-44a1-9df3-592d95aa0e66/kube-rbac-proxy/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.302781 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5k8pq_e090cbac-2c8e-44a1-9df3-592d95aa0e66/controller/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.348109 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-frr-files/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.481290 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-frr-files/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.481904 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-reloader/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.530780 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-metrics/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.550634 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-reloader/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.670695 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-metrics/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.728510 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-frr-files/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.734942 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-reloader/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.735506 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-metrics/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.912716 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-metrics/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.913642 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-reloader/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.926674 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/controller/0.log" Nov 24 13:52:05 crc kubenswrapper[4756]: I1124 13:52:05.931183 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-frr-files/0.log" Nov 24 13:52:06 crc kubenswrapper[4756]: I1124 13:52:06.102944 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/frr-metrics/0.log" Nov 24 13:52:06 crc kubenswrapper[4756]: I1124 13:52:06.128059 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/kube-rbac-proxy/0.log" Nov 24 13:52:06 crc kubenswrapper[4756]: I1124 13:52:06.170398 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/kube-rbac-proxy-frr/0.log" Nov 24 13:52:06 crc kubenswrapper[4756]: I1124 13:52:06.415668 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/reloader/0.log" Nov 24 13:52:06 crc kubenswrapper[4756]: I1124 13:52:06.476352 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-2db82_7b16b70e-daf1-4950-994b-b0e166b95215/frr-k8s-webhook-server/0.log" Nov 24 13:52:06 crc kubenswrapper[4756]: I1124 13:52:06.841052 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-585778954f-lwtdb_fe690ebd-7c38-400c-bd3e-ddec63e361ea/manager/0.log" Nov 24 13:52:07 crc kubenswrapper[4756]: I1124 13:52:07.100408 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7bbc7fc897-wd2m8_df074e39-b784-4804-afd8-3625ad3fecd0/webhook-server/0.log" Nov 24 13:52:07 crc kubenswrapper[4756]: I1124 13:52:07.202136 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/frr/0.log" Nov 24 13:52:07 crc kubenswrapper[4756]: I1124 13:52:07.266979 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r8zzq_54a65742-0318-409a-8a0e-e5c01abe2945/kube-rbac-proxy/0.log" Nov 24 13:52:07 crc kubenswrapper[4756]: I1124 13:52:07.615407 4756 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r8zzq_54a65742-0318-409a-8a0e-e5c01abe2945/speaker/0.log" Nov 24 13:52:21 crc kubenswrapper[4756]: I1124 13:52:21.955741 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/util/0.log" Nov 24 13:52:22 crc kubenswrapper[4756]: I1124 13:52:22.172284 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/util/0.log" Nov 24 13:52:22 crc kubenswrapper[4756]: I1124 13:52:22.197506 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/pull/0.log" Nov 24 13:52:22 crc kubenswrapper[4756]: I1124 13:52:22.233239 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/pull/0.log" Nov 24 13:52:22 crc kubenswrapper[4756]: I1124 13:52:22.855082 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/util/0.log" Nov 24 13:52:22 crc kubenswrapper[4756]: I1124 13:52:22.859655 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/pull/0.log" Nov 24 13:52:22 crc kubenswrapper[4756]: I1124 13:52:22.904704 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/extract/0.log" Nov 24 13:52:23 crc 
kubenswrapper[4756]: I1124 13:52:23.043195 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/util/0.log" Nov 24 13:52:23 crc kubenswrapper[4756]: I1124 13:52:23.232707 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/pull/0.log" Nov 24 13:52:23 crc kubenswrapper[4756]: I1124 13:52:23.237857 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/pull/0.log" Nov 24 13:52:23 crc kubenswrapper[4756]: I1124 13:52:23.249405 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/util/0.log" Nov 24 13:52:23 crc kubenswrapper[4756]: I1124 13:52:23.375631 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/util/0.log" Nov 24 13:52:23 crc kubenswrapper[4756]: I1124 13:52:23.415092 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/pull/0.log" Nov 24 13:52:23 crc kubenswrapper[4756]: I1124 13:52:23.476779 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/extract/0.log" Nov 24 13:52:23 crc kubenswrapper[4756]: I1124 13:52:23.544329 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/extract-utilities/0.log" Nov 24 13:52:23 crc kubenswrapper[4756]: I1124 13:52:23.749646 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/extract-content/0.log" Nov 24 13:52:23 crc kubenswrapper[4756]: I1124 13:52:23.756705 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/extract-utilities/0.log" Nov 24 13:52:23 crc kubenswrapper[4756]: I1124 13:52:23.781821 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/extract-content/0.log" Nov 24 13:52:24 crc kubenswrapper[4756]: I1124 13:52:24.524531 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/extract-utilities/0.log" Nov 24 13:52:24 crc kubenswrapper[4756]: I1124 13:52:24.658447 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/extract-content/0.log" Nov 24 13:52:24 crc kubenswrapper[4756]: I1124 13:52:24.937801 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/extract-utilities/0.log" Nov 24 13:52:25 crc kubenswrapper[4756]: I1124 13:52:25.165348 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/extract-utilities/0.log" Nov 24 13:52:25 crc kubenswrapper[4756]: I1124 13:52:25.212960 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/extract-content/0.log" Nov 24 13:52:25 crc kubenswrapper[4756]: I1124 13:52:25.266773 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/extract-content/0.log" Nov 24 13:52:25 crc kubenswrapper[4756]: I1124 13:52:25.389626 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/registry-server/0.log" Nov 24 13:52:25 crc kubenswrapper[4756]: I1124 13:52:25.425695 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/extract-utilities/0.log" Nov 24 13:52:25 crc kubenswrapper[4756]: I1124 13:52:25.434523 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/extract-content/0.log" Nov 24 13:52:25 crc kubenswrapper[4756]: I1124 13:52:25.660492 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/util/0.log" Nov 24 13:52:25 crc kubenswrapper[4756]: I1124 13:52:25.765924 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/registry-server/0.log" Nov 24 13:52:25 crc kubenswrapper[4756]: I1124 13:52:25.871088 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/pull/0.log" Nov 24 13:52:25 crc kubenswrapper[4756]: I1124 13:52:25.887253 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/util/0.log" Nov 24 13:52:25 crc kubenswrapper[4756]: I1124 13:52:25.896189 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/pull/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.069358 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/extract/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.103623 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/pull/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.147261 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/util/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.151039 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dzrdb_7cc1cad9-8e95-4b2c-bfb2-dd376178315f/marketplace-operator/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.316376 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/extract-utilities/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.476420 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/extract-utilities/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.478856 4756 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/extract-content/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.499969 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/extract-content/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.632760 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/extract-utilities/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.706025 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/extract-content/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.731868 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mhvgk_b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0/extract-utilities/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.849914 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/registry-server/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.906820 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mhvgk_b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0/extract-content/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.911337 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mhvgk_b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0/extract-utilities/0.log" Nov 24 13:52:26 crc kubenswrapper[4756]: I1124 13:52:26.961312 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-mhvgk_b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0/extract-content/0.log" Nov 24 13:52:27 crc kubenswrapper[4756]: I1124 13:52:27.134409 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mhvgk_b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0/extract-utilities/0.log" Nov 24 13:52:27 crc kubenswrapper[4756]: I1124 13:52:27.134860 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mhvgk_b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0/extract-content/0.log" Nov 24 13:52:27 crc kubenswrapper[4756]: I1124 13:52:27.514585 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mhvgk_b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0/registry-server/0.log" Nov 24 13:52:33 crc kubenswrapper[4756]: I1124 13:52:33.479068 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 13:52:33 crc kubenswrapper[4756]: I1124 13:52:33.479614 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 13:52:33 crc kubenswrapper[4756]: I1124 13:52:33.479668 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 13:52:33 crc kubenswrapper[4756]: I1124 13:52:33.480088 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 13:52:33 crc kubenswrapper[4756]: I1124 13:52:33.480148 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" gracePeriod=600 Nov 24 13:52:33 crc kubenswrapper[4756]: E1124 13:52:33.598973 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:52:34 crc kubenswrapper[4756]: I1124 13:52:34.531064 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" exitCode=0 Nov 24 13:52:34 crc kubenswrapper[4756]: I1124 13:52:34.531405 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626"} Nov 24 13:52:34 crc kubenswrapper[4756]: I1124 13:52:34.531438 4756 scope.go:117] "RemoveContainer" containerID="1d6c255b6871907348c9ab51d5ab6365d73f4f83df3b91fbd78ba7ad736fa552" Nov 24 13:52:34 crc kubenswrapper[4756]: I1124 13:52:34.532077 4756 
scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:52:34 crc kubenswrapper[4756]: E1124 13:52:34.532328 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:52:39 crc kubenswrapper[4756]: I1124 13:52:39.956797 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-gw4jk_0f42bf51-8a6c-4390-83a6-dbae6d26126a/prometheus-operator/0.log" Nov 24 13:52:40 crc kubenswrapper[4756]: I1124 13:52:40.106704 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j_d1a8e934-b419-4e57-9311-7c8a34745da9/prometheus-operator-admission-webhook/0.log" Nov 24 13:52:40 crc kubenswrapper[4756]: I1124 13:52:40.182293 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8_7b44bcfa-0c82-4db2-b4e0-310a76be2b6f/prometheus-operator-admission-webhook/0.log" Nov 24 13:52:40 crc kubenswrapper[4756]: I1124 13:52:40.343231 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-qmk8j_7ee15342-4efd-4e6b-8569-b54b26064eaf/operator/0.log" Nov 24 13:52:40 crc kubenswrapper[4756]: I1124 13:52:40.410649 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-p8ln7_0055f07d-1546-45ad-b576-87d016490055/perses-operator/0.log" Nov 24 13:52:47 crc kubenswrapper[4756]: I1124 13:52:47.476359 4756 scope.go:117] 
"RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:52:47 crc kubenswrapper[4756]: E1124 13:52:47.477283 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.021356 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r7zl5"] Nov 24 13:52:51 crc kubenswrapper[4756]: E1124 13:52:51.022390 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ee137e-159d-442a-86e8-87bb2b6181b9" containerName="registry-server" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.022404 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ee137e-159d-442a-86e8-87bb2b6181b9" containerName="registry-server" Nov 24 13:52:51 crc kubenswrapper[4756]: E1124 13:52:51.022416 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ee137e-159d-442a-86e8-87bb2b6181b9" containerName="extract-content" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.022422 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ee137e-159d-442a-86e8-87bb2b6181b9" containerName="extract-content" Nov 24 13:52:51 crc kubenswrapper[4756]: E1124 13:52:51.022452 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ee137e-159d-442a-86e8-87bb2b6181b9" containerName="extract-utilities" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.022459 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ee137e-159d-442a-86e8-87bb2b6181b9" containerName="extract-utilities" Nov 24 13:52:51 crc kubenswrapper[4756]: 
I1124 13:52:51.022662 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8ee137e-159d-442a-86e8-87bb2b6181b9" containerName="registry-server" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.027185 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.027192 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6zzm\" (UniqueName: \"kubernetes.io/projected/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-kube-api-access-s6zzm\") pod \"community-operators-r7zl5\" (UID: \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\") " pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.039433 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r7zl5"] Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.078745 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-utilities\") pod \"community-operators-r7zl5\" (UID: \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\") " pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.078927 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-catalog-content\") pod \"community-operators-r7zl5\" (UID: \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\") " pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.181451 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6zzm\" (UniqueName: 
\"kubernetes.io/projected/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-kube-api-access-s6zzm\") pod \"community-operators-r7zl5\" (UID: \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\") " pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.181550 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-utilities\") pod \"community-operators-r7zl5\" (UID: \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\") " pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.181628 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-catalog-content\") pod \"community-operators-r7zl5\" (UID: \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\") " pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.182215 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-catalog-content\") pod \"community-operators-r7zl5\" (UID: \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\") " pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.182736 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-utilities\") pod \"community-operators-r7zl5\" (UID: \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\") " pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.218987 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6zzm\" (UniqueName: 
\"kubernetes.io/projected/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-kube-api-access-s6zzm\") pod \"community-operators-r7zl5\" (UID: \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\") " pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:52:51 crc kubenswrapper[4756]: I1124 13:52:51.408544 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:52:52 crc kubenswrapper[4756]: I1124 13:52:52.024610 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r7zl5"] Nov 24 13:52:52 crc kubenswrapper[4756]: I1124 13:52:52.718126 4756 generic.go:334] "Generic (PLEG): container finished" podID="aa6dab6a-5e5a-45df-86ee-595cc3f93e16" containerID="b4e40ea145c99bea8648d2e5d69dd010d9eac51b6c0caa6a5d532504d11a2348" exitCode=0 Nov 24 13:52:52 crc kubenswrapper[4756]: I1124 13:52:52.718178 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r7zl5" event={"ID":"aa6dab6a-5e5a-45df-86ee-595cc3f93e16","Type":"ContainerDied","Data":"b4e40ea145c99bea8648d2e5d69dd010d9eac51b6c0caa6a5d532504d11a2348"} Nov 24 13:52:52 crc kubenswrapper[4756]: I1124 13:52:52.718640 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r7zl5" event={"ID":"aa6dab6a-5e5a-45df-86ee-595cc3f93e16","Type":"ContainerStarted","Data":"653b81ee9450e3eb75ea472527a19941720302b9fc4bb015bc0f27d0b9f719a6"} Nov 24 13:52:53 crc kubenswrapper[4756]: I1124 13:52:53.729330 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r7zl5" event={"ID":"aa6dab6a-5e5a-45df-86ee-595cc3f93e16","Type":"ContainerStarted","Data":"465b61dfb46df830ab341b8a5c6b599af74fc5b19441cfc3afa789a13f0bc632"} Nov 24 13:52:55 crc kubenswrapper[4756]: I1124 13:52:55.748573 4756 generic.go:334] "Generic (PLEG): container finished" podID="aa6dab6a-5e5a-45df-86ee-595cc3f93e16" 
containerID="465b61dfb46df830ab341b8a5c6b599af74fc5b19441cfc3afa789a13f0bc632" exitCode=0 Nov 24 13:52:55 crc kubenswrapper[4756]: I1124 13:52:55.748646 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r7zl5" event={"ID":"aa6dab6a-5e5a-45df-86ee-595cc3f93e16","Type":"ContainerDied","Data":"465b61dfb46df830ab341b8a5c6b599af74fc5b19441cfc3afa789a13f0bc632"} Nov 24 13:52:56 crc kubenswrapper[4756]: I1124 13:52:56.760539 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r7zl5" event={"ID":"aa6dab6a-5e5a-45df-86ee-595cc3f93e16","Type":"ContainerStarted","Data":"4f72a36a696a50c62e8f03d5622d708face656b690bf46b3447d23d67dd6de63"} Nov 24 13:52:56 crc kubenswrapper[4756]: I1124 13:52:56.781417 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r7zl5" podStartSLOduration=3.342707051 podStartE2EDuration="6.781399803s" podCreationTimestamp="2025-11-24 13:52:50 +0000 UTC" firstStartedPulling="2025-11-24 13:52:52.719988915 +0000 UTC m=+5105.077503057" lastFinishedPulling="2025-11-24 13:52:56.158681667 +0000 UTC m=+5108.516195809" observedRunningTime="2025-11-24 13:52:56.777762905 +0000 UTC m=+5109.135277067" watchObservedRunningTime="2025-11-24 13:52:56.781399803 +0000 UTC m=+5109.138913945" Nov 24 13:53:01 crc kubenswrapper[4756]: I1124 13:53:01.409495 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:53:01 crc kubenswrapper[4756]: I1124 13:53:01.410012 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:53:01 crc kubenswrapper[4756]: I1124 13:53:01.475901 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:53:01 crc kubenswrapper[4756]: E1124 13:53:01.476143 4756 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:53:01 crc kubenswrapper[4756]: I1124 13:53:01.478406 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:53:01 crc kubenswrapper[4756]: I1124 13:53:01.850766 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:53:01 crc kubenswrapper[4756]: I1124 13:53:01.907518 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r7zl5"] Nov 24 13:53:03 crc kubenswrapper[4756]: E1124 13:53:03.552462 4756 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.200:60258->38.102.83.200:44291: write tcp 38.102.83.200:60258->38.102.83.200:44291: write: broken pipe Nov 24 13:53:03 crc kubenswrapper[4756]: I1124 13:53:03.820824 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r7zl5" podUID="aa6dab6a-5e5a-45df-86ee-595cc3f93e16" containerName="registry-server" containerID="cri-o://4f72a36a696a50c62e8f03d5622d708face656b690bf46b3447d23d67dd6de63" gracePeriod=2 Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.303596 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.440058 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-utilities\") pod \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\" (UID: \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\") " Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.440347 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-catalog-content\") pod \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\" (UID: \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\") " Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.440398 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6zzm\" (UniqueName: \"kubernetes.io/projected/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-kube-api-access-s6zzm\") pod \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\" (UID: \"aa6dab6a-5e5a-45df-86ee-595cc3f93e16\") " Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.441494 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-utilities" (OuterVolumeSpecName: "utilities") pod "aa6dab6a-5e5a-45df-86ee-595cc3f93e16" (UID: "aa6dab6a-5e5a-45df-86ee-595cc3f93e16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.449099 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-kube-api-access-s6zzm" (OuterVolumeSpecName: "kube-api-access-s6zzm") pod "aa6dab6a-5e5a-45df-86ee-595cc3f93e16" (UID: "aa6dab6a-5e5a-45df-86ee-595cc3f93e16"). InnerVolumeSpecName "kube-api-access-s6zzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.500862 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa6dab6a-5e5a-45df-86ee-595cc3f93e16" (UID: "aa6dab6a-5e5a-45df-86ee-595cc3f93e16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.542923 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6zzm\" (UniqueName: \"kubernetes.io/projected/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-kube-api-access-s6zzm\") on node \"crc\" DevicePath \"\"" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.542950 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.542959 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa6dab6a-5e5a-45df-86ee-595cc3f93e16-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.837437 4756 generic.go:334] "Generic (PLEG): container finished" podID="aa6dab6a-5e5a-45df-86ee-595cc3f93e16" containerID="4f72a36a696a50c62e8f03d5622d708face656b690bf46b3447d23d67dd6de63" exitCode=0 Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.837484 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r7zl5" event={"ID":"aa6dab6a-5e5a-45df-86ee-595cc3f93e16","Type":"ContainerDied","Data":"4f72a36a696a50c62e8f03d5622d708face656b690bf46b3447d23d67dd6de63"} Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.837515 4756 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-r7zl5" event={"ID":"aa6dab6a-5e5a-45df-86ee-595cc3f93e16","Type":"ContainerDied","Data":"653b81ee9450e3eb75ea472527a19941720302b9fc4bb015bc0f27d0b9f719a6"} Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.837535 4756 scope.go:117] "RemoveContainer" containerID="4f72a36a696a50c62e8f03d5622d708face656b690bf46b3447d23d67dd6de63" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.837694 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r7zl5" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.888961 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r7zl5"] Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.889364 4756 scope.go:117] "RemoveContainer" containerID="465b61dfb46df830ab341b8a5c6b599af74fc5b19441cfc3afa789a13f0bc632" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.901182 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r7zl5"] Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.913399 4756 scope.go:117] "RemoveContainer" containerID="b4e40ea145c99bea8648d2e5d69dd010d9eac51b6c0caa6a5d532504d11a2348" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.964945 4756 scope.go:117] "RemoveContainer" containerID="4f72a36a696a50c62e8f03d5622d708face656b690bf46b3447d23d67dd6de63" Nov 24 13:53:04 crc kubenswrapper[4756]: E1124 13:53:04.965481 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f72a36a696a50c62e8f03d5622d708face656b690bf46b3447d23d67dd6de63\": container with ID starting with 4f72a36a696a50c62e8f03d5622d708face656b690bf46b3447d23d67dd6de63 not found: ID does not exist" containerID="4f72a36a696a50c62e8f03d5622d708face656b690bf46b3447d23d67dd6de63" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 
13:53:04.965521 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f72a36a696a50c62e8f03d5622d708face656b690bf46b3447d23d67dd6de63"} err="failed to get container status \"4f72a36a696a50c62e8f03d5622d708face656b690bf46b3447d23d67dd6de63\": rpc error: code = NotFound desc = could not find container \"4f72a36a696a50c62e8f03d5622d708face656b690bf46b3447d23d67dd6de63\": container with ID starting with 4f72a36a696a50c62e8f03d5622d708face656b690bf46b3447d23d67dd6de63 not found: ID does not exist" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.965547 4756 scope.go:117] "RemoveContainer" containerID="465b61dfb46df830ab341b8a5c6b599af74fc5b19441cfc3afa789a13f0bc632" Nov 24 13:53:04 crc kubenswrapper[4756]: E1124 13:53:04.965845 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"465b61dfb46df830ab341b8a5c6b599af74fc5b19441cfc3afa789a13f0bc632\": container with ID starting with 465b61dfb46df830ab341b8a5c6b599af74fc5b19441cfc3afa789a13f0bc632 not found: ID does not exist" containerID="465b61dfb46df830ab341b8a5c6b599af74fc5b19441cfc3afa789a13f0bc632" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.965885 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465b61dfb46df830ab341b8a5c6b599af74fc5b19441cfc3afa789a13f0bc632"} err="failed to get container status \"465b61dfb46df830ab341b8a5c6b599af74fc5b19441cfc3afa789a13f0bc632\": rpc error: code = NotFound desc = could not find container \"465b61dfb46df830ab341b8a5c6b599af74fc5b19441cfc3afa789a13f0bc632\": container with ID starting with 465b61dfb46df830ab341b8a5c6b599af74fc5b19441cfc3afa789a13f0bc632 not found: ID does not exist" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.965902 4756 scope.go:117] "RemoveContainer" containerID="b4e40ea145c99bea8648d2e5d69dd010d9eac51b6c0caa6a5d532504d11a2348" Nov 24 13:53:04 crc 
kubenswrapper[4756]: E1124 13:53:04.966252 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e40ea145c99bea8648d2e5d69dd010d9eac51b6c0caa6a5d532504d11a2348\": container with ID starting with b4e40ea145c99bea8648d2e5d69dd010d9eac51b6c0caa6a5d532504d11a2348 not found: ID does not exist" containerID="b4e40ea145c99bea8648d2e5d69dd010d9eac51b6c0caa6a5d532504d11a2348" Nov 24 13:53:04 crc kubenswrapper[4756]: I1124 13:53:04.966287 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e40ea145c99bea8648d2e5d69dd010d9eac51b6c0caa6a5d532504d11a2348"} err="failed to get container status \"b4e40ea145c99bea8648d2e5d69dd010d9eac51b6c0caa6a5d532504d11a2348\": rpc error: code = NotFound desc = could not find container \"b4e40ea145c99bea8648d2e5d69dd010d9eac51b6c0caa6a5d532504d11a2348\": container with ID starting with b4e40ea145c99bea8648d2e5d69dd010d9eac51b6c0caa6a5d532504d11a2348 not found: ID does not exist" Nov 24 13:53:06 crc kubenswrapper[4756]: I1124 13:53:06.487144 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa6dab6a-5e5a-45df-86ee-595cc3f93e16" path="/var/lib/kubelet/pods/aa6dab6a-5e5a-45df-86ee-595cc3f93e16/volumes" Nov 24 13:53:16 crc kubenswrapper[4756]: I1124 13:53:16.475516 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:53:16 crc kubenswrapper[4756]: E1124 13:53:16.476779 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:53:28 crc 
kubenswrapper[4756]: I1124 13:53:28.484152 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:53:28 crc kubenswrapper[4756]: E1124 13:53:28.485319 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:53:42 crc kubenswrapper[4756]: I1124 13:53:42.476096 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:53:42 crc kubenswrapper[4756]: E1124 13:53:42.476828 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:53:55 crc kubenswrapper[4756]: I1124 13:53:55.475903 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:53:55 crc kubenswrapper[4756]: E1124 13:53:55.476661 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 
24 13:54:09 crc kubenswrapper[4756]: I1124 13:54:09.476743 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:54:09 crc kubenswrapper[4756]: E1124 13:54:09.477462 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:54:23 crc kubenswrapper[4756]: I1124 13:54:23.475669 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:54:23 crc kubenswrapper[4756]: E1124 13:54:23.476608 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:54:28 crc kubenswrapper[4756]: I1124 13:54:28.826296 4756 generic.go:334] "Generic (PLEG): container finished" podID="163191c1-015e-4d7b-a7fb-982e125285ca" containerID="5f3801d5e81249c06e7acff00e0d28390b5eb585b8e7c7af4eab3023fa56ff2e" exitCode=0 Nov 24 13:54:28 crc kubenswrapper[4756]: I1124 13:54:28.826910 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqv5d/must-gather-48cgw" event={"ID":"163191c1-015e-4d7b-a7fb-982e125285ca","Type":"ContainerDied","Data":"5f3801d5e81249c06e7acff00e0d28390b5eb585b8e7c7af4eab3023fa56ff2e"} Nov 24 13:54:28 crc kubenswrapper[4756]: I1124 13:54:28.827689 4756 scope.go:117] 
"RemoveContainer" containerID="5f3801d5e81249c06e7acff00e0d28390b5eb585b8e7c7af4eab3023fa56ff2e" Nov 24 13:54:29 crc kubenswrapper[4756]: I1124 13:54:29.526923 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vqv5d_must-gather-48cgw_163191c1-015e-4d7b-a7fb-982e125285ca/gather/0.log" Nov 24 13:54:37 crc kubenswrapper[4756]: I1124 13:54:37.388062 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vqv5d/must-gather-48cgw"] Nov 24 13:54:37 crc kubenswrapper[4756]: I1124 13:54:37.389057 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vqv5d/must-gather-48cgw" podUID="163191c1-015e-4d7b-a7fb-982e125285ca" containerName="copy" containerID="cri-o://e4ca40aeb1decdcd466b081c35788dd2e25fef4c8fd208c1570f9c8b57308d85" gracePeriod=2 Nov 24 13:54:37 crc kubenswrapper[4756]: I1124 13:54:37.411456 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vqv5d/must-gather-48cgw"] Nov 24 13:54:37 crc kubenswrapper[4756]: I1124 13:54:37.834196 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vqv5d_must-gather-48cgw_163191c1-015e-4d7b-a7fb-982e125285ca/copy/0.log" Nov 24 13:54:37 crc kubenswrapper[4756]: I1124 13:54:37.834631 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vqv5d/must-gather-48cgw" Nov 24 13:54:37 crc kubenswrapper[4756]: I1124 13:54:37.911128 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzphc\" (UniqueName: \"kubernetes.io/projected/163191c1-015e-4d7b-a7fb-982e125285ca-kube-api-access-bzphc\") pod \"163191c1-015e-4d7b-a7fb-982e125285ca\" (UID: \"163191c1-015e-4d7b-a7fb-982e125285ca\") " Nov 24 13:54:37 crc kubenswrapper[4756]: I1124 13:54:37.911501 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/163191c1-015e-4d7b-a7fb-982e125285ca-must-gather-output\") pod \"163191c1-015e-4d7b-a7fb-982e125285ca\" (UID: \"163191c1-015e-4d7b-a7fb-982e125285ca\") " Nov 24 13:54:37 crc kubenswrapper[4756]: I1124 13:54:37.917183 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163191c1-015e-4d7b-a7fb-982e125285ca-kube-api-access-bzphc" (OuterVolumeSpecName: "kube-api-access-bzphc") pod "163191c1-015e-4d7b-a7fb-982e125285ca" (UID: "163191c1-015e-4d7b-a7fb-982e125285ca"). InnerVolumeSpecName "kube-api-access-bzphc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:54:37 crc kubenswrapper[4756]: I1124 13:54:37.932807 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vqv5d_must-gather-48cgw_163191c1-015e-4d7b-a7fb-982e125285ca/copy/0.log" Nov 24 13:54:37 crc kubenswrapper[4756]: I1124 13:54:37.933459 4756 generic.go:334] "Generic (PLEG): container finished" podID="163191c1-015e-4d7b-a7fb-982e125285ca" containerID="e4ca40aeb1decdcd466b081c35788dd2e25fef4c8fd208c1570f9c8b57308d85" exitCode=143 Nov 24 13:54:37 crc kubenswrapper[4756]: I1124 13:54:37.933522 4756 scope.go:117] "RemoveContainer" containerID="e4ca40aeb1decdcd466b081c35788dd2e25fef4c8fd208c1570f9c8b57308d85" Nov 24 13:54:37 crc kubenswrapper[4756]: I1124 13:54:37.933707 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vqv5d/must-gather-48cgw" Nov 24 13:54:37 crc kubenswrapper[4756]: I1124 13:54:37.985958 4756 scope.go:117] "RemoveContainer" containerID="5f3801d5e81249c06e7acff00e0d28390b5eb585b8e7c7af4eab3023fa56ff2e" Nov 24 13:54:38 crc kubenswrapper[4756]: I1124 13:54:38.014296 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzphc\" (UniqueName: \"kubernetes.io/projected/163191c1-015e-4d7b-a7fb-982e125285ca-kube-api-access-bzphc\") on node \"crc\" DevicePath \"\"" Nov 24 13:54:38 crc kubenswrapper[4756]: I1124 13:54:38.064279 4756 scope.go:117] "RemoveContainer" containerID="e4ca40aeb1decdcd466b081c35788dd2e25fef4c8fd208c1570f9c8b57308d85" Nov 24 13:54:38 crc kubenswrapper[4756]: E1124 13:54:38.065022 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4ca40aeb1decdcd466b081c35788dd2e25fef4c8fd208c1570f9c8b57308d85\": container with ID starting with e4ca40aeb1decdcd466b081c35788dd2e25fef4c8fd208c1570f9c8b57308d85 not found: ID does not exist" 
containerID="e4ca40aeb1decdcd466b081c35788dd2e25fef4c8fd208c1570f9c8b57308d85" Nov 24 13:54:38 crc kubenswrapper[4756]: I1124 13:54:38.065324 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4ca40aeb1decdcd466b081c35788dd2e25fef4c8fd208c1570f9c8b57308d85"} err="failed to get container status \"e4ca40aeb1decdcd466b081c35788dd2e25fef4c8fd208c1570f9c8b57308d85\": rpc error: code = NotFound desc = could not find container \"e4ca40aeb1decdcd466b081c35788dd2e25fef4c8fd208c1570f9c8b57308d85\": container with ID starting with e4ca40aeb1decdcd466b081c35788dd2e25fef4c8fd208c1570f9c8b57308d85 not found: ID does not exist" Nov 24 13:54:38 crc kubenswrapper[4756]: I1124 13:54:38.065443 4756 scope.go:117] "RemoveContainer" containerID="5f3801d5e81249c06e7acff00e0d28390b5eb585b8e7c7af4eab3023fa56ff2e" Nov 24 13:54:38 crc kubenswrapper[4756]: E1124 13:54:38.068799 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f3801d5e81249c06e7acff00e0d28390b5eb585b8e7c7af4eab3023fa56ff2e\": container with ID starting with 5f3801d5e81249c06e7acff00e0d28390b5eb585b8e7c7af4eab3023fa56ff2e not found: ID does not exist" containerID="5f3801d5e81249c06e7acff00e0d28390b5eb585b8e7c7af4eab3023fa56ff2e" Nov 24 13:54:38 crc kubenswrapper[4756]: I1124 13:54:38.068846 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f3801d5e81249c06e7acff00e0d28390b5eb585b8e7c7af4eab3023fa56ff2e"} err="failed to get container status \"5f3801d5e81249c06e7acff00e0d28390b5eb585b8e7c7af4eab3023fa56ff2e\": rpc error: code = NotFound desc = could not find container \"5f3801d5e81249c06e7acff00e0d28390b5eb585b8e7c7af4eab3023fa56ff2e\": container with ID starting with 5f3801d5e81249c06e7acff00e0d28390b5eb585b8e7c7af4eab3023fa56ff2e not found: ID does not exist" Nov 24 13:54:38 crc kubenswrapper[4756]: I1124 13:54:38.100617 4756 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/163191c1-015e-4d7b-a7fb-982e125285ca-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "163191c1-015e-4d7b-a7fb-982e125285ca" (UID: "163191c1-015e-4d7b-a7fb-982e125285ca"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:54:38 crc kubenswrapper[4756]: I1124 13:54:38.116530 4756 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/163191c1-015e-4d7b-a7fb-982e125285ca-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 13:54:38 crc kubenswrapper[4756]: I1124 13:54:38.484915 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:54:38 crc kubenswrapper[4756]: E1124 13:54:38.485277 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:54:38 crc kubenswrapper[4756]: I1124 13:54:38.502383 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="163191c1-015e-4d7b-a7fb-982e125285ca" path="/var/lib/kubelet/pods/163191c1-015e-4d7b-a7fb-982e125285ca/volumes" Nov 24 13:54:52 crc kubenswrapper[4756]: I1124 13:54:52.476723 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:54:52 crc kubenswrapper[4756]: E1124 13:54:52.477819 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:55:07 crc kubenswrapper[4756]: I1124 13:55:07.476541 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:55:07 crc kubenswrapper[4756]: E1124 13:55:07.477548 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:55:21 crc kubenswrapper[4756]: I1124 13:55:21.475375 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:55:21 crc kubenswrapper[4756]: E1124 13:55:21.476346 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:55:35 crc kubenswrapper[4756]: I1124 13:55:35.475898 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:55:35 crc kubenswrapper[4756]: E1124 13:55:35.476721 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:55:46 crc kubenswrapper[4756]: I1124 13:55:46.476877 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:55:46 crc kubenswrapper[4756]: E1124 13:55:46.477987 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:55:57 crc kubenswrapper[4756]: I1124 13:55:57.475229 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:55:57 crc kubenswrapper[4756]: E1124 13:55:57.475887 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:56:12 crc kubenswrapper[4756]: I1124 13:56:12.476450 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:56:12 crc kubenswrapper[4756]: E1124 13:56:12.477680 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:56:21 crc kubenswrapper[4756]: I1124 13:56:21.571528 4756 scope.go:117] "RemoveContainer" containerID="0ce87aec85a8cf37a77a462fe0d903055003d0e07b19e9d20b13725bf741fc7a" Nov 24 13:56:21 crc kubenswrapper[4756]: I1124 13:56:21.608973 4756 scope.go:117] "RemoveContainer" containerID="daab870c54eef05ba5273dbba3255221b6695616f5a646331ead48077a41201a" Nov 24 13:56:21 crc kubenswrapper[4756]: I1124 13:56:21.627331 4756 scope.go:117] "RemoveContainer" containerID="715340bbe79e96ae9847d313cd9058c7302dff7d8fe78f5427d9c954d395a665" Nov 24 13:56:25 crc kubenswrapper[4756]: I1124 13:56:25.475818 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:56:25 crc kubenswrapper[4756]: E1124 13:56:25.476920 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.690753 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-842fx"] Nov 24 13:56:30 crc kubenswrapper[4756]: E1124 13:56:30.692044 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163191c1-015e-4d7b-a7fb-982e125285ca" containerName="gather" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.692059 4756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="163191c1-015e-4d7b-a7fb-982e125285ca" containerName="gather" Nov 24 13:56:30 crc kubenswrapper[4756]: E1124 13:56:30.692092 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa6dab6a-5e5a-45df-86ee-595cc3f93e16" containerName="extract-utilities" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.692100 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa6dab6a-5e5a-45df-86ee-595cc3f93e16" containerName="extract-utilities" Nov 24 13:56:30 crc kubenswrapper[4756]: E1124 13:56:30.692120 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163191c1-015e-4d7b-a7fb-982e125285ca" containerName="copy" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.692128 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="163191c1-015e-4d7b-a7fb-982e125285ca" containerName="copy" Nov 24 13:56:30 crc kubenswrapper[4756]: E1124 13:56:30.692147 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa6dab6a-5e5a-45df-86ee-595cc3f93e16" containerName="extract-content" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.692171 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa6dab6a-5e5a-45df-86ee-595cc3f93e16" containerName="extract-content" Nov 24 13:56:30 crc kubenswrapper[4756]: E1124 13:56:30.692197 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa6dab6a-5e5a-45df-86ee-595cc3f93e16" containerName="registry-server" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.692206 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa6dab6a-5e5a-45df-86ee-595cc3f93e16" containerName="registry-server" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.692454 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="163191c1-015e-4d7b-a7fb-982e125285ca" containerName="gather" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.692483 4756 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="aa6dab6a-5e5a-45df-86ee-595cc3f93e16" containerName="registry-server" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.692504 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="163191c1-015e-4d7b-a7fb-982e125285ca" containerName="copy" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.694368 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.703695 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-842fx"] Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.770430 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d4ff55b-5d27-4ecd-bc71-9a91650d0e60-catalog-content\") pod \"redhat-operators-842fx\" (UID: \"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60\") " pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.770494 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grd29\" (UniqueName: \"kubernetes.io/projected/8d4ff55b-5d27-4ecd-bc71-9a91650d0e60-kube-api-access-grd29\") pod \"redhat-operators-842fx\" (UID: \"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60\") " pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.771061 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d4ff55b-5d27-4ecd-bc71-9a91650d0e60-utilities\") pod \"redhat-operators-842fx\" (UID: \"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60\") " pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.872797 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d4ff55b-5d27-4ecd-bc71-9a91650d0e60-utilities\") pod \"redhat-operators-842fx\" (UID: \"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60\") " pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.872869 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d4ff55b-5d27-4ecd-bc71-9a91650d0e60-catalog-content\") pod \"redhat-operators-842fx\" (UID: \"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60\") " pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.872913 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grd29\" (UniqueName: \"kubernetes.io/projected/8d4ff55b-5d27-4ecd-bc71-9a91650d0e60-kube-api-access-grd29\") pod \"redhat-operators-842fx\" (UID: \"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60\") " pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.873481 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d4ff55b-5d27-4ecd-bc71-9a91650d0e60-utilities\") pod \"redhat-operators-842fx\" (UID: \"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60\") " pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.873486 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d4ff55b-5d27-4ecd-bc71-9a91650d0e60-catalog-content\") pod \"redhat-operators-842fx\" (UID: \"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60\") " pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:30 crc kubenswrapper[4756]: I1124 13:56:30.895362 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grd29\" (UniqueName: 
\"kubernetes.io/projected/8d4ff55b-5d27-4ecd-bc71-9a91650d0e60-kube-api-access-grd29\") pod \"redhat-operators-842fx\" (UID: \"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60\") " pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:31 crc kubenswrapper[4756]: I1124 13:56:31.032218 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:32 crc kubenswrapper[4756]: I1124 13:56:32.119039 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-842fx"] Nov 24 13:56:32 crc kubenswrapper[4756]: I1124 13:56:32.223488 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-842fx" event={"ID":"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60","Type":"ContainerStarted","Data":"a4bff718a599b799dd0f5d6021b646e77c1d9e68382a899aa46f01f66bf3e2bb"} Nov 24 13:56:33 crc kubenswrapper[4756]: I1124 13:56:33.235523 4756 generic.go:334] "Generic (PLEG): container finished" podID="8d4ff55b-5d27-4ecd-bc71-9a91650d0e60" containerID="fd6d802e8d34f1aa70ea2c13de91142f1dc8a1feed1265f00ad6cbcaac3eec91" exitCode=0 Nov 24 13:56:33 crc kubenswrapper[4756]: I1124 13:56:33.235615 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-842fx" event={"ID":"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60","Type":"ContainerDied","Data":"fd6d802e8d34f1aa70ea2c13de91142f1dc8a1feed1265f00ad6cbcaac3eec91"} Nov 24 13:56:33 crc kubenswrapper[4756]: I1124 13:56:33.238692 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 13:56:36 crc kubenswrapper[4756]: I1124 13:56:36.475594 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:56:36 crc kubenswrapper[4756]: E1124 13:56:36.476760 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:56:40 crc kubenswrapper[4756]: I1124 13:56:40.308498 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-842fx" event={"ID":"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60","Type":"ContainerStarted","Data":"3240529bb2c8e0b88b7e2abecd13a77e4a66a3f260403e252a4b2993ae63bfbc"} Nov 24 13:56:41 crc kubenswrapper[4756]: I1124 13:56:41.325746 4756 generic.go:334] "Generic (PLEG): container finished" podID="8d4ff55b-5d27-4ecd-bc71-9a91650d0e60" containerID="3240529bb2c8e0b88b7e2abecd13a77e4a66a3f260403e252a4b2993ae63bfbc" exitCode=0 Nov 24 13:56:41 crc kubenswrapper[4756]: I1124 13:56:41.325827 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-842fx" event={"ID":"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60","Type":"ContainerDied","Data":"3240529bb2c8e0b88b7e2abecd13a77e4a66a3f260403e252a4b2993ae63bfbc"} Nov 24 13:56:45 crc kubenswrapper[4756]: I1124 13:56:45.380176 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-842fx" event={"ID":"8d4ff55b-5d27-4ecd-bc71-9a91650d0e60","Type":"ContainerStarted","Data":"f065a40998caa56237a602b349189c88bf2cdbf54c9e9cb41582c1747fd04f60"} Nov 24 13:56:49 crc kubenswrapper[4756]: I1124 13:56:49.689862 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:56:49 crc kubenswrapper[4756]: E1124 13:56:49.691328 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:56:51 crc kubenswrapper[4756]: I1124 13:56:51.033039 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:51 crc kubenswrapper[4756]: I1124 13:56:51.033427 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:51 crc kubenswrapper[4756]: I1124 13:56:51.124137 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:51 crc kubenswrapper[4756]: I1124 13:56:51.155641 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-842fx" podStartSLOduration=10.060622492 podStartE2EDuration="21.155610947s" podCreationTimestamp="2025-11-24 13:56:30 +0000 UTC" firstStartedPulling="2025-11-24 13:56:33.238403608 +0000 UTC m=+5325.595917750" lastFinishedPulling="2025-11-24 13:56:44.333392033 +0000 UTC m=+5336.690906205" observedRunningTime="2025-11-24 13:56:45.407601661 +0000 UTC m=+5337.765115813" watchObservedRunningTime="2025-11-24 13:56:51.155610947 +0000 UTC m=+5343.513125129" Nov 24 13:56:51 crc kubenswrapper[4756]: I1124 13:56:51.789810 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-842fx" Nov 24 13:56:51 crc kubenswrapper[4756]: I1124 13:56:51.865708 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-842fx"] Nov 24 13:56:51 crc kubenswrapper[4756]: I1124 13:56:51.920967 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mhvgk"] Nov 24 13:56:51 crc 
kubenswrapper[4756]: I1124 13:56:51.921255 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mhvgk" podUID="b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" containerName="registry-server" containerID="cri-o://107bda56961a4db004b495580d2c0e1255f4487a7df444edb78fea5f443703a6" gracePeriod=2 Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.403479 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.490527 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-catalog-content\") pod \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\" (UID: \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\") " Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.490638 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-utilities\") pod \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\" (UID: \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\") " Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.490669 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lchvg\" (UniqueName: \"kubernetes.io/projected/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-kube-api-access-lchvg\") pod \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\" (UID: \"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0\") " Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.493234 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-utilities" (OuterVolumeSpecName: "utilities") pod "b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" (UID: "b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.495856 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.506534 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-kube-api-access-lchvg" (OuterVolumeSpecName: "kube-api-access-lchvg") pod "b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" (UID: "b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0"). InnerVolumeSpecName "kube-api-access-lchvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.598787 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lchvg\" (UniqueName: \"kubernetes.io/projected/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-kube-api-access-lchvg\") on node \"crc\" DevicePath \"\"" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.602178 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" (UID: "b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.700395 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.732108 4756 generic.go:334] "Generic (PLEG): container finished" podID="b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" containerID="107bda56961a4db004b495580d2c0e1255f4487a7df444edb78fea5f443703a6" exitCode=0 Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.732204 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhvgk" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.732283 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhvgk" event={"ID":"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0","Type":"ContainerDied","Data":"107bda56961a4db004b495580d2c0e1255f4487a7df444edb78fea5f443703a6"} Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.732345 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhvgk" event={"ID":"b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0","Type":"ContainerDied","Data":"14e8d0f9aff8586d603998edde1dbc995f3b529ca507debec03b4777c3b6c5c1"} Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.732371 4756 scope.go:117] "RemoveContainer" containerID="107bda56961a4db004b495580d2c0e1255f4487a7df444edb78fea5f443703a6" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.756400 4756 scope.go:117] "RemoveContainer" containerID="7292f37b83b11a62e82a4f896a15b4b19b372a1dfc02035fdc5269f3421a5ce4" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.767464 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mhvgk"] Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 
13:56:52.775518 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mhvgk"] Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.793217 4756 scope.go:117] "RemoveContainer" containerID="e9011b2836898bef51e5f1bcd026fc13d8a6ea42e1854988458b0765ce29e886" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.831121 4756 scope.go:117] "RemoveContainer" containerID="107bda56961a4db004b495580d2c0e1255f4487a7df444edb78fea5f443703a6" Nov 24 13:56:52 crc kubenswrapper[4756]: E1124 13:56:52.831839 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"107bda56961a4db004b495580d2c0e1255f4487a7df444edb78fea5f443703a6\": container with ID starting with 107bda56961a4db004b495580d2c0e1255f4487a7df444edb78fea5f443703a6 not found: ID does not exist" containerID="107bda56961a4db004b495580d2c0e1255f4487a7df444edb78fea5f443703a6" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.831881 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"107bda56961a4db004b495580d2c0e1255f4487a7df444edb78fea5f443703a6"} err="failed to get container status \"107bda56961a4db004b495580d2c0e1255f4487a7df444edb78fea5f443703a6\": rpc error: code = NotFound desc = could not find container \"107bda56961a4db004b495580d2c0e1255f4487a7df444edb78fea5f443703a6\": container with ID starting with 107bda56961a4db004b495580d2c0e1255f4487a7df444edb78fea5f443703a6 not found: ID does not exist" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.831907 4756 scope.go:117] "RemoveContainer" containerID="7292f37b83b11a62e82a4f896a15b4b19b372a1dfc02035fdc5269f3421a5ce4" Nov 24 13:56:52 crc kubenswrapper[4756]: E1124 13:56:52.832211 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7292f37b83b11a62e82a4f896a15b4b19b372a1dfc02035fdc5269f3421a5ce4\": container with ID 
starting with 7292f37b83b11a62e82a4f896a15b4b19b372a1dfc02035fdc5269f3421a5ce4 not found: ID does not exist" containerID="7292f37b83b11a62e82a4f896a15b4b19b372a1dfc02035fdc5269f3421a5ce4" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.832255 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7292f37b83b11a62e82a4f896a15b4b19b372a1dfc02035fdc5269f3421a5ce4"} err="failed to get container status \"7292f37b83b11a62e82a4f896a15b4b19b372a1dfc02035fdc5269f3421a5ce4\": rpc error: code = NotFound desc = could not find container \"7292f37b83b11a62e82a4f896a15b4b19b372a1dfc02035fdc5269f3421a5ce4\": container with ID starting with 7292f37b83b11a62e82a4f896a15b4b19b372a1dfc02035fdc5269f3421a5ce4 not found: ID does not exist" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.832282 4756 scope.go:117] "RemoveContainer" containerID="e9011b2836898bef51e5f1bcd026fc13d8a6ea42e1854988458b0765ce29e886" Nov 24 13:56:52 crc kubenswrapper[4756]: E1124 13:56:52.832592 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9011b2836898bef51e5f1bcd026fc13d8a6ea42e1854988458b0765ce29e886\": container with ID starting with e9011b2836898bef51e5f1bcd026fc13d8a6ea42e1854988458b0765ce29e886 not found: ID does not exist" containerID="e9011b2836898bef51e5f1bcd026fc13d8a6ea42e1854988458b0765ce29e886" Nov 24 13:56:52 crc kubenswrapper[4756]: I1124 13:56:52.832626 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9011b2836898bef51e5f1bcd026fc13d8a6ea42e1854988458b0765ce29e886"} err="failed to get container status \"e9011b2836898bef51e5f1bcd026fc13d8a6ea42e1854988458b0765ce29e886\": rpc error: code = NotFound desc = could not find container \"e9011b2836898bef51e5f1bcd026fc13d8a6ea42e1854988458b0765ce29e886\": container with ID starting with e9011b2836898bef51e5f1bcd026fc13d8a6ea42e1854988458b0765ce29e886 not found: 
ID does not exist" Nov 24 13:56:54 crc kubenswrapper[4756]: I1124 13:56:54.488744 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" path="/var/lib/kubelet/pods/b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0/volumes" Nov 24 13:57:00 crc kubenswrapper[4756]: I1124 13:57:00.476558 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:57:00 crc kubenswrapper[4756]: E1124 13:57:00.477683 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:57:14 crc kubenswrapper[4756]: I1124 13:57:14.475430 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:57:14 crc kubenswrapper[4756]: E1124 13:57:14.476421 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:57:27 crc kubenswrapper[4756]: I1124 13:57:27.475529 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:57:27 crc kubenswrapper[4756]: E1124 13:57:27.476318 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.465134 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8fnz6/must-gather-tptz7"] Nov 24 13:57:33 crc kubenswrapper[4756]: E1124 13:57:33.467031 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" containerName="extract-utilities" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.467134 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" containerName="extract-utilities" Nov 24 13:57:33 crc kubenswrapper[4756]: E1124 13:57:33.467238 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" containerName="extract-content" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.467316 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" containerName="extract-content" Nov 24 13:57:33 crc kubenswrapper[4756]: E1124 13:57:33.467422 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" containerName="registry-server" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.467501 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" containerName="registry-server" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.467820 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6455e35-9ec6-4ea5-b6a9-fd0e148d55b0" containerName="registry-server" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.469031 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8fnz6/must-gather-tptz7" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.482310 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8fnz6"/"kube-root-ca.crt" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.483452 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8fnz6"/"openshift-service-ca.crt" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.496183 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8fnz6/must-gather-tptz7"] Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.525202 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f3b518a-c3b0-47d5-84cc-8db7970134cc-must-gather-output\") pod \"must-gather-tptz7\" (UID: \"0f3b518a-c3b0-47d5-84cc-8db7970134cc\") " pod="openshift-must-gather-8fnz6/must-gather-tptz7" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.525350 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw8hx\" (UniqueName: \"kubernetes.io/projected/0f3b518a-c3b0-47d5-84cc-8db7970134cc-kube-api-access-mw8hx\") pod \"must-gather-tptz7\" (UID: \"0f3b518a-c3b0-47d5-84cc-8db7970134cc\") " pod="openshift-must-gather-8fnz6/must-gather-tptz7" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.627369 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f3b518a-c3b0-47d5-84cc-8db7970134cc-must-gather-output\") pod \"must-gather-tptz7\" (UID: \"0f3b518a-c3b0-47d5-84cc-8db7970134cc\") " pod="openshift-must-gather-8fnz6/must-gather-tptz7" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.627459 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mw8hx\" (UniqueName: \"kubernetes.io/projected/0f3b518a-c3b0-47d5-84cc-8db7970134cc-kube-api-access-mw8hx\") pod \"must-gather-tptz7\" (UID: \"0f3b518a-c3b0-47d5-84cc-8db7970134cc\") " pod="openshift-must-gather-8fnz6/must-gather-tptz7" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.627848 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f3b518a-c3b0-47d5-84cc-8db7970134cc-must-gather-output\") pod \"must-gather-tptz7\" (UID: \"0f3b518a-c3b0-47d5-84cc-8db7970134cc\") " pod="openshift-must-gather-8fnz6/must-gather-tptz7" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.648270 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw8hx\" (UniqueName: \"kubernetes.io/projected/0f3b518a-c3b0-47d5-84cc-8db7970134cc-kube-api-access-mw8hx\") pod \"must-gather-tptz7\" (UID: \"0f3b518a-c3b0-47d5-84cc-8db7970134cc\") " pod="openshift-must-gather-8fnz6/must-gather-tptz7" Nov 24 13:57:33 crc kubenswrapper[4756]: I1124 13:57:33.802460 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8fnz6/must-gather-tptz7" Nov 24 13:57:34 crc kubenswrapper[4756]: I1124 13:57:34.294497 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8fnz6/must-gather-tptz7"] Nov 24 13:57:35 crc kubenswrapper[4756]: I1124 13:57:35.203300 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8fnz6/must-gather-tptz7" event={"ID":"0f3b518a-c3b0-47d5-84cc-8db7970134cc","Type":"ContainerStarted","Data":"9bf4eb765a6e218d342c2f146f1d33d9958275e04c24d24eb4a25dd1c585a33c"} Nov 24 13:57:35 crc kubenswrapper[4756]: I1124 13:57:35.203894 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8fnz6/must-gather-tptz7" event={"ID":"0f3b518a-c3b0-47d5-84cc-8db7970134cc","Type":"ContainerStarted","Data":"dd54f9e5746011226c38b433e1f9451131e79c02e02b2992f793d264976bb73a"} Nov 24 13:57:35 crc kubenswrapper[4756]: I1124 13:57:35.203911 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8fnz6/must-gather-tptz7" event={"ID":"0f3b518a-c3b0-47d5-84cc-8db7970134cc","Type":"ContainerStarted","Data":"2e85de58c6dbf69276b68c7b68e1c7d5a26ba3387d00065fa356c8316c56dd19"} Nov 24 13:57:35 crc kubenswrapper[4756]: I1124 13:57:35.224794 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8fnz6/must-gather-tptz7" podStartSLOduration=2.224774895 podStartE2EDuration="2.224774895s" podCreationTimestamp="2025-11-24 13:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:57:35.21899012 +0000 UTC m=+5387.576504302" watchObservedRunningTime="2025-11-24 13:57:35.224774895 +0000 UTC m=+5387.582289057" Nov 24 13:57:38 crc kubenswrapper[4756]: I1124 13:57:38.675782 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8fnz6/crc-debug-mkb5g"] Nov 24 13:57:38 crc kubenswrapper[4756]: 
I1124 13:57:38.679272 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" Nov 24 13:57:38 crc kubenswrapper[4756]: I1124 13:57:38.685859 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8fnz6"/"default-dockercfg-cf58n" Nov 24 13:57:38 crc kubenswrapper[4756]: I1124 13:57:38.820976 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/61fddb20-de91-4897-b324-542f666652ae-host\") pod \"crc-debug-mkb5g\" (UID: \"61fddb20-de91-4897-b324-542f666652ae\") " pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" Nov 24 13:57:38 crc kubenswrapper[4756]: I1124 13:57:38.821116 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd7zf\" (UniqueName: \"kubernetes.io/projected/61fddb20-de91-4897-b324-542f666652ae-kube-api-access-qd7zf\") pod \"crc-debug-mkb5g\" (UID: \"61fddb20-de91-4897-b324-542f666652ae\") " pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" Nov 24 13:57:38 crc kubenswrapper[4756]: I1124 13:57:38.923093 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/61fddb20-de91-4897-b324-542f666652ae-host\") pod \"crc-debug-mkb5g\" (UID: \"61fddb20-de91-4897-b324-542f666652ae\") " pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" Nov 24 13:57:38 crc kubenswrapper[4756]: I1124 13:57:38.923250 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/61fddb20-de91-4897-b324-542f666652ae-host\") pod \"crc-debug-mkb5g\" (UID: \"61fddb20-de91-4897-b324-542f666652ae\") " pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" Nov 24 13:57:38 crc kubenswrapper[4756]: I1124 13:57:38.923298 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qd7zf\" (UniqueName: \"kubernetes.io/projected/61fddb20-de91-4897-b324-542f666652ae-kube-api-access-qd7zf\") pod \"crc-debug-mkb5g\" (UID: \"61fddb20-de91-4897-b324-542f666652ae\") " pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" Nov 24 13:57:38 crc kubenswrapper[4756]: I1124 13:57:38.944969 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd7zf\" (UniqueName: \"kubernetes.io/projected/61fddb20-de91-4897-b324-542f666652ae-kube-api-access-qd7zf\") pod \"crc-debug-mkb5g\" (UID: \"61fddb20-de91-4897-b324-542f666652ae\") " pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" Nov 24 13:57:39 crc kubenswrapper[4756]: I1124 13:57:39.002130 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" Nov 24 13:57:39 crc kubenswrapper[4756]: I1124 13:57:39.258686 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" event={"ID":"61fddb20-de91-4897-b324-542f666652ae","Type":"ContainerStarted","Data":"c06322584bdaa1c469b344d56e29dcb3c570b778aecf142467b861a3db5f71a2"} Nov 24 13:57:39 crc kubenswrapper[4756]: I1124 13:57:39.480328 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 13:57:40 crc kubenswrapper[4756]: I1124 13:57:40.291834 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" event={"ID":"61fddb20-de91-4897-b324-542f666652ae","Type":"ContainerStarted","Data":"c6e408dc2d55898cb8e10429713c886b15b3537178f38fdccb4843d12809333e"} Nov 24 13:57:40 crc kubenswrapper[4756]: I1124 13:57:40.313458 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"1a32eeb64eb89d82e5f773623aaf3c9abc2217a542f84b0a5a5ac837f28a5018"} Nov 24 13:57:40 
crc kubenswrapper[4756]: I1124 13:57:40.323080 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" podStartSLOduration=2.323056227 podStartE2EDuration="2.323056227s" podCreationTimestamp="2025-11-24 13:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:57:40.314223 +0000 UTC m=+5392.671737172" watchObservedRunningTime="2025-11-24 13:57:40.323056227 +0000 UTC m=+5392.680570379" Nov 24 13:58:19 crc kubenswrapper[4756]: I1124 13:58:19.673985 4756 generic.go:334] "Generic (PLEG): container finished" podID="61fddb20-de91-4897-b324-542f666652ae" containerID="c6e408dc2d55898cb8e10429713c886b15b3537178f38fdccb4843d12809333e" exitCode=0 Nov 24 13:58:19 crc kubenswrapper[4756]: I1124 13:58:19.674139 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" event={"ID":"61fddb20-de91-4897-b324-542f666652ae","Type":"ContainerDied","Data":"c6e408dc2d55898cb8e10429713c886b15b3537178f38fdccb4843d12809333e"} Nov 24 13:58:20 crc kubenswrapper[4756]: I1124 13:58:20.812555 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" Nov 24 13:58:20 crc kubenswrapper[4756]: I1124 13:58:20.848302 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8fnz6/crc-debug-mkb5g"] Nov 24 13:58:20 crc kubenswrapper[4756]: I1124 13:58:20.856485 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8fnz6/crc-debug-mkb5g"] Nov 24 13:58:20 crc kubenswrapper[4756]: I1124 13:58:20.955493 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/61fddb20-de91-4897-b324-542f666652ae-host\") pod \"61fddb20-de91-4897-b324-542f666652ae\" (UID: \"61fddb20-de91-4897-b324-542f666652ae\") " Nov 24 13:58:20 crc kubenswrapper[4756]: I1124 13:58:20.955610 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fddb20-de91-4897-b324-542f666652ae-host" (OuterVolumeSpecName: "host") pod "61fddb20-de91-4897-b324-542f666652ae" (UID: "61fddb20-de91-4897-b324-542f666652ae"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 13:58:20 crc kubenswrapper[4756]: I1124 13:58:20.955790 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd7zf\" (UniqueName: \"kubernetes.io/projected/61fddb20-de91-4897-b324-542f666652ae-kube-api-access-qd7zf\") pod \"61fddb20-de91-4897-b324-542f666652ae\" (UID: \"61fddb20-de91-4897-b324-542f666652ae\") " Nov 24 13:58:20 crc kubenswrapper[4756]: I1124 13:58:20.956299 4756 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/61fddb20-de91-4897-b324-542f666652ae-host\") on node \"crc\" DevicePath \"\"" Nov 24 13:58:20 crc kubenswrapper[4756]: I1124 13:58:20.961018 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61fddb20-de91-4897-b324-542f666652ae-kube-api-access-qd7zf" (OuterVolumeSpecName: "kube-api-access-qd7zf") pod "61fddb20-de91-4897-b324-542f666652ae" (UID: "61fddb20-de91-4897-b324-542f666652ae"). InnerVolumeSpecName "kube-api-access-qd7zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:58:21 crc kubenswrapper[4756]: I1124 13:58:21.059534 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd7zf\" (UniqueName: \"kubernetes.io/projected/61fddb20-de91-4897-b324-542f666652ae-kube-api-access-qd7zf\") on node \"crc\" DevicePath \"\"" Nov 24 13:58:21 crc kubenswrapper[4756]: I1124 13:58:21.701470 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c06322584bdaa1c469b344d56e29dcb3c570b778aecf142467b861a3db5f71a2" Nov 24 13:58:21 crc kubenswrapper[4756]: I1124 13:58:21.701515 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8fnz6/crc-debug-mkb5g" Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.026480 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8fnz6/crc-debug-9wbwn"] Nov 24 13:58:22 crc kubenswrapper[4756]: E1124 13:58:22.027053 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fddb20-de91-4897-b324-542f666652ae" containerName="container-00" Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.027067 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fddb20-de91-4897-b324-542f666652ae" containerName="container-00" Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.027296 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="61fddb20-de91-4897-b324-542f666652ae" containerName="container-00" Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.027947 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.030353 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8fnz6"/"default-dockercfg-cf58n" Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.178321 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/94f43262-1757-4b42-9f76-838fc9d3d96b-host\") pod \"crc-debug-9wbwn\" (UID: \"94f43262-1757-4b42-9f76-838fc9d3d96b\") " pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.178566 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m8g2\" (UniqueName: \"kubernetes.io/projected/94f43262-1757-4b42-9f76-838fc9d3d96b-kube-api-access-6m8g2\") pod \"crc-debug-9wbwn\" (UID: \"94f43262-1757-4b42-9f76-838fc9d3d96b\") " 
pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.280335 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m8g2\" (UniqueName: \"kubernetes.io/projected/94f43262-1757-4b42-9f76-838fc9d3d96b-kube-api-access-6m8g2\") pod \"crc-debug-9wbwn\" (UID: \"94f43262-1757-4b42-9f76-838fc9d3d96b\") " pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.280520 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/94f43262-1757-4b42-9f76-838fc9d3d96b-host\") pod \"crc-debug-9wbwn\" (UID: \"94f43262-1757-4b42-9f76-838fc9d3d96b\") " pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.280585 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/94f43262-1757-4b42-9f76-838fc9d3d96b-host\") pod \"crc-debug-9wbwn\" (UID: \"94f43262-1757-4b42-9f76-838fc9d3d96b\") " pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.303109 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m8g2\" (UniqueName: \"kubernetes.io/projected/94f43262-1757-4b42-9f76-838fc9d3d96b-kube-api-access-6m8g2\") pod \"crc-debug-9wbwn\" (UID: \"94f43262-1757-4b42-9f76-838fc9d3d96b\") " pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.349255 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" Nov 24 13:58:22 crc kubenswrapper[4756]: W1124 13:58:22.396018 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94f43262_1757_4b42_9f76_838fc9d3d96b.slice/crio-abc60a178a1fcc4d5f787974a61f198e6e48932bebdc535403408b845fd18bd9 WatchSource:0}: Error finding container abc60a178a1fcc4d5f787974a61f198e6e48932bebdc535403408b845fd18bd9: Status 404 returned error can't find the container with id abc60a178a1fcc4d5f787974a61f198e6e48932bebdc535403408b845fd18bd9 Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.486940 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61fddb20-de91-4897-b324-542f666652ae" path="/var/lib/kubelet/pods/61fddb20-de91-4897-b324-542f666652ae/volumes" Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.711086 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" event={"ID":"94f43262-1757-4b42-9f76-838fc9d3d96b","Type":"ContainerStarted","Data":"e4f54fa401158c2da56a9365e243db2b74512538175c8756205ecb931c768b09"} Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.711147 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" event={"ID":"94f43262-1757-4b42-9f76-838fc9d3d96b","Type":"ContainerStarted","Data":"abc60a178a1fcc4d5f787974a61f198e6e48932bebdc535403408b845fd18bd9"} Nov 24 13:58:22 crc kubenswrapper[4756]: I1124 13:58:22.735070 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" podStartSLOduration=0.735043203 podStartE2EDuration="735.043203ms" podCreationTimestamp="2025-11-24 13:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:58:22.72486902 +0000 UTC m=+5435.082383162" 
watchObservedRunningTime="2025-11-24 13:58:22.735043203 +0000 UTC m=+5435.092557345" Nov 24 13:58:23 crc kubenswrapper[4756]: I1124 13:58:23.722552 4756 generic.go:334] "Generic (PLEG): container finished" podID="94f43262-1757-4b42-9f76-838fc9d3d96b" containerID="e4f54fa401158c2da56a9365e243db2b74512538175c8756205ecb931c768b09" exitCode=0 Nov 24 13:58:23 crc kubenswrapper[4756]: I1124 13:58:23.722911 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" event={"ID":"94f43262-1757-4b42-9f76-838fc9d3d96b","Type":"ContainerDied","Data":"e4f54fa401158c2da56a9365e243db2b74512538175c8756205ecb931c768b09"} Nov 24 13:58:24 crc kubenswrapper[4756]: I1124 13:58:24.841302 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" Nov 24 13:58:24 crc kubenswrapper[4756]: I1124 13:58:24.930374 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m8g2\" (UniqueName: \"kubernetes.io/projected/94f43262-1757-4b42-9f76-838fc9d3d96b-kube-api-access-6m8g2\") pod \"94f43262-1757-4b42-9f76-838fc9d3d96b\" (UID: \"94f43262-1757-4b42-9f76-838fc9d3d96b\") " Nov 24 13:58:24 crc kubenswrapper[4756]: I1124 13:58:24.930552 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/94f43262-1757-4b42-9f76-838fc9d3d96b-host\") pod \"94f43262-1757-4b42-9f76-838fc9d3d96b\" (UID: \"94f43262-1757-4b42-9f76-838fc9d3d96b\") " Nov 24 13:58:24 crc kubenswrapper[4756]: I1124 13:58:24.931025 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94f43262-1757-4b42-9f76-838fc9d3d96b-host" (OuterVolumeSpecName: "host") pod "94f43262-1757-4b42-9f76-838fc9d3d96b" (UID: "94f43262-1757-4b42-9f76-838fc9d3d96b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 13:58:24 crc kubenswrapper[4756]: I1124 13:58:24.936863 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f43262-1757-4b42-9f76-838fc9d3d96b-kube-api-access-6m8g2" (OuterVolumeSpecName: "kube-api-access-6m8g2") pod "94f43262-1757-4b42-9f76-838fc9d3d96b" (UID: "94f43262-1757-4b42-9f76-838fc9d3d96b"). InnerVolumeSpecName "kube-api-access-6m8g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:58:25 crc kubenswrapper[4756]: I1124 13:58:25.035533 4756 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/94f43262-1757-4b42-9f76-838fc9d3d96b-host\") on node \"crc\" DevicePath \"\"" Nov 24 13:58:25 crc kubenswrapper[4756]: I1124 13:58:25.035565 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m8g2\" (UniqueName: \"kubernetes.io/projected/94f43262-1757-4b42-9f76-838fc9d3d96b-kube-api-access-6m8g2\") on node \"crc\" DevicePath \"\"" Nov 24 13:58:25 crc kubenswrapper[4756]: I1124 13:58:25.397319 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8fnz6/crc-debug-9wbwn"] Nov 24 13:58:25 crc kubenswrapper[4756]: I1124 13:58:25.404923 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8fnz6/crc-debug-9wbwn"] Nov 24 13:58:25 crc kubenswrapper[4756]: I1124 13:58:25.744104 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abc60a178a1fcc4d5f787974a61f198e6e48932bebdc535403408b845fd18bd9" Nov 24 13:58:25 crc kubenswrapper[4756]: I1124 13:58:25.744192 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8fnz6/crc-debug-9wbwn" Nov 24 13:58:26 crc kubenswrapper[4756]: I1124 13:58:26.503761 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94f43262-1757-4b42-9f76-838fc9d3d96b" path="/var/lib/kubelet/pods/94f43262-1757-4b42-9f76-838fc9d3d96b/volumes" Nov 24 13:58:26 crc kubenswrapper[4756]: I1124 13:58:26.575956 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8fnz6/crc-debug-xcnnb"] Nov 24 13:58:26 crc kubenswrapper[4756]: E1124 13:58:26.576516 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f43262-1757-4b42-9f76-838fc9d3d96b" containerName="container-00" Nov 24 13:58:26 crc kubenswrapper[4756]: I1124 13:58:26.576543 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f43262-1757-4b42-9f76-838fc9d3d96b" containerName="container-00" Nov 24 13:58:26 crc kubenswrapper[4756]: I1124 13:58:26.576915 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="94f43262-1757-4b42-9f76-838fc9d3d96b" containerName="container-00" Nov 24 13:58:26 crc kubenswrapper[4756]: I1124 13:58:26.577918 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8fnz6/crc-debug-xcnnb" Nov 24 13:58:26 crc kubenswrapper[4756]: I1124 13:58:26.580104 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8fnz6"/"default-dockercfg-cf58n" Nov 24 13:58:26 crc kubenswrapper[4756]: I1124 13:58:26.665809 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwbr4\" (UniqueName: \"kubernetes.io/projected/a398f678-abd5-4484-bcfd-e601cc8c572a-kube-api-access-gwbr4\") pod \"crc-debug-xcnnb\" (UID: \"a398f678-abd5-4484-bcfd-e601cc8c572a\") " pod="openshift-must-gather-8fnz6/crc-debug-xcnnb" Nov 24 13:58:26 crc kubenswrapper[4756]: I1124 13:58:26.666115 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a398f678-abd5-4484-bcfd-e601cc8c572a-host\") pod \"crc-debug-xcnnb\" (UID: \"a398f678-abd5-4484-bcfd-e601cc8c572a\") " pod="openshift-must-gather-8fnz6/crc-debug-xcnnb" Nov 24 13:58:26 crc kubenswrapper[4756]: I1124 13:58:26.767739 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwbr4\" (UniqueName: \"kubernetes.io/projected/a398f678-abd5-4484-bcfd-e601cc8c572a-kube-api-access-gwbr4\") pod \"crc-debug-xcnnb\" (UID: \"a398f678-abd5-4484-bcfd-e601cc8c572a\") " pod="openshift-must-gather-8fnz6/crc-debug-xcnnb" Nov 24 13:58:26 crc kubenswrapper[4756]: I1124 13:58:26.767802 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a398f678-abd5-4484-bcfd-e601cc8c572a-host\") pod \"crc-debug-xcnnb\" (UID: \"a398f678-abd5-4484-bcfd-e601cc8c572a\") " pod="openshift-must-gather-8fnz6/crc-debug-xcnnb" Nov 24 13:58:26 crc kubenswrapper[4756]: I1124 13:58:26.767939 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/a398f678-abd5-4484-bcfd-e601cc8c572a-host\") pod \"crc-debug-xcnnb\" (UID: \"a398f678-abd5-4484-bcfd-e601cc8c572a\") " pod="openshift-must-gather-8fnz6/crc-debug-xcnnb" Nov 24 13:58:26 crc kubenswrapper[4756]: I1124 13:58:26.791110 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwbr4\" (UniqueName: \"kubernetes.io/projected/a398f678-abd5-4484-bcfd-e601cc8c572a-kube-api-access-gwbr4\") pod \"crc-debug-xcnnb\" (UID: \"a398f678-abd5-4484-bcfd-e601cc8c572a\") " pod="openshift-must-gather-8fnz6/crc-debug-xcnnb" Nov 24 13:58:26 crc kubenswrapper[4756]: I1124 13:58:26.909802 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8fnz6/crc-debug-xcnnb" Nov 24 13:58:26 crc kubenswrapper[4756]: W1124 13:58:26.937938 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda398f678_abd5_4484_bcfd_e601cc8c572a.slice/crio-050d675c8fd45799505f110fdd1a9ced44807a600c197b47fbffb2a283a9ab16 WatchSource:0}: Error finding container 050d675c8fd45799505f110fdd1a9ced44807a600c197b47fbffb2a283a9ab16: Status 404 returned error can't find the container with id 050d675c8fd45799505f110fdd1a9ced44807a600c197b47fbffb2a283a9ab16 Nov 24 13:58:27 crc kubenswrapper[4756]: I1124 13:58:27.763041 4756 generic.go:334] "Generic (PLEG): container finished" podID="a398f678-abd5-4484-bcfd-e601cc8c572a" containerID="465d0466a233d68105295c6f5b93b61edacb81f53cf6ef6c60f1d4d3981c8a8a" exitCode=0 Nov 24 13:58:27 crc kubenswrapper[4756]: I1124 13:58:27.763119 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8fnz6/crc-debug-xcnnb" event={"ID":"a398f678-abd5-4484-bcfd-e601cc8c572a","Type":"ContainerDied","Data":"465d0466a233d68105295c6f5b93b61edacb81f53cf6ef6c60f1d4d3981c8a8a"} Nov 24 13:58:27 crc kubenswrapper[4756]: I1124 13:58:27.763378 4756 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-must-gather-8fnz6/crc-debug-xcnnb" event={"ID":"a398f678-abd5-4484-bcfd-e601cc8c572a","Type":"ContainerStarted","Data":"050d675c8fd45799505f110fdd1a9ced44807a600c197b47fbffb2a283a9ab16"} Nov 24 13:58:27 crc kubenswrapper[4756]: I1124 13:58:27.805931 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8fnz6/crc-debug-xcnnb"] Nov 24 13:58:27 crc kubenswrapper[4756]: I1124 13:58:27.813906 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8fnz6/crc-debug-xcnnb"] Nov 24 13:58:28 crc kubenswrapper[4756]: I1124 13:58:28.920638 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8fnz6/crc-debug-xcnnb" Nov 24 13:58:29 crc kubenswrapper[4756]: I1124 13:58:29.012613 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwbr4\" (UniqueName: \"kubernetes.io/projected/a398f678-abd5-4484-bcfd-e601cc8c572a-kube-api-access-gwbr4\") pod \"a398f678-abd5-4484-bcfd-e601cc8c572a\" (UID: \"a398f678-abd5-4484-bcfd-e601cc8c572a\") " Nov 24 13:58:29 crc kubenswrapper[4756]: I1124 13:58:29.012870 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a398f678-abd5-4484-bcfd-e601cc8c572a-host\") pod \"a398f678-abd5-4484-bcfd-e601cc8c572a\" (UID: \"a398f678-abd5-4484-bcfd-e601cc8c572a\") " Nov 24 13:58:29 crc kubenswrapper[4756]: I1124 13:58:29.013017 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a398f678-abd5-4484-bcfd-e601cc8c572a-host" (OuterVolumeSpecName: "host") pod "a398f678-abd5-4484-bcfd-e601cc8c572a" (UID: "a398f678-abd5-4484-bcfd-e601cc8c572a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 13:58:29 crc kubenswrapper[4756]: I1124 13:58:29.013459 4756 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a398f678-abd5-4484-bcfd-e601cc8c572a-host\") on node \"crc\" DevicePath \"\"" Nov 24 13:58:29 crc kubenswrapper[4756]: I1124 13:58:29.018820 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a398f678-abd5-4484-bcfd-e601cc8c572a-kube-api-access-gwbr4" (OuterVolumeSpecName: "kube-api-access-gwbr4") pod "a398f678-abd5-4484-bcfd-e601cc8c572a" (UID: "a398f678-abd5-4484-bcfd-e601cc8c572a"). InnerVolumeSpecName "kube-api-access-gwbr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 13:58:29 crc kubenswrapper[4756]: I1124 13:58:29.116216 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwbr4\" (UniqueName: \"kubernetes.io/projected/a398f678-abd5-4484-bcfd-e601cc8c572a-kube-api-access-gwbr4\") on node \"crc\" DevicePath \"\"" Nov 24 13:58:29 crc kubenswrapper[4756]: I1124 13:58:29.790793 4756 scope.go:117] "RemoveContainer" containerID="465d0466a233d68105295c6f5b93b61edacb81f53cf6ef6c60f1d4d3981c8a8a" Nov 24 13:58:29 crc kubenswrapper[4756]: I1124 13:58:29.790875 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8fnz6/crc-debug-xcnnb" Nov 24 13:58:30 crc kubenswrapper[4756]: I1124 13:58:30.489360 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a398f678-abd5-4484-bcfd-e601cc8c572a" path="/var/lib/kubelet/pods/a398f678-abd5-4484-bcfd-e601cc8c572a/volumes" Nov 24 13:59:05 crc kubenswrapper[4756]: I1124 13:59:05.229899 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5ddf8b8dd6-scmvb_767d77ce-06bb-44dc-b47f-229303527133/barbican-api/0.log" Nov 24 13:59:05 crc kubenswrapper[4756]: I1124 13:59:05.387634 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5ddf8b8dd6-scmvb_767d77ce-06bb-44dc-b47f-229303527133/barbican-api-log/0.log" Nov 24 13:59:05 crc kubenswrapper[4756]: I1124 13:59:05.547195 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77956fdfb6-wlggx_fb164396-9603-40ac-a47b-5b8feb1be35c/barbican-keystone-listener-log/0.log" Nov 24 13:59:05 crc kubenswrapper[4756]: I1124 13:59:05.564406 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77956fdfb6-wlggx_fb164396-9603-40ac-a47b-5b8feb1be35c/barbican-keystone-listener/0.log" Nov 24 13:59:05 crc kubenswrapper[4756]: I1124 13:59:05.692968 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-568b98fff9-ngjr7_2ab695f1-c645-42dc-be38-2935fbe4977d/barbican-worker/0.log" Nov 24 13:59:05 crc kubenswrapper[4756]: I1124 13:59:05.762876 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-568b98fff9-ngjr7_2ab695f1-c645-42dc-be38-2935fbe4977d/barbican-worker-log/0.log" Nov 24 13:59:05 crc kubenswrapper[4756]: I1124 13:59:05.899900 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-4drlc_4e481796-37f1-413f-8274-2d32d2f3ef5c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:05 crc kubenswrapper[4756]: I1124 13:59:05.968437 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f1e15270-1d58-42bb-ad0a-635726bae163/ceilometer-central-agent/0.log" Nov 24 13:59:06 crc kubenswrapper[4756]: I1124 13:59:06.072011 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f1e15270-1d58-42bb-ad0a-635726bae163/ceilometer-notification-agent/0.log" Nov 24 13:59:06 crc kubenswrapper[4756]: I1124 13:59:06.130145 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f1e15270-1d58-42bb-ad0a-635726bae163/proxy-httpd/0.log" Nov 24 13:59:06 crc kubenswrapper[4756]: I1124 13:59:06.146012 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f1e15270-1d58-42bb-ad0a-635726bae163/sg-core/0.log" Nov 24 13:59:06 crc kubenswrapper[4756]: I1124 13:59:06.345253 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348/cinder-api-log/0.log" Nov 24 13:59:06 crc kubenswrapper[4756]: I1124 13:59:06.385517 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7ec1c9cb-5c11-4699-a7e5-6ec9e8a46348/cinder-api/0.log" Nov 24 13:59:06 crc kubenswrapper[4756]: I1124 13:59:06.538222 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_202e89fe-1aa2-462b-b3cf-2d71151c8de9/cinder-scheduler/0.log" Nov 24 13:59:06 crc kubenswrapper[4756]: I1124 13:59:06.629346 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_202e89fe-1aa2-462b-b3cf-2d71151c8de9/probe/0.log" Nov 24 13:59:06 crc kubenswrapper[4756]: I1124 13:59:06.689021 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hjzpg_529df660-5b77-4ba7-b190-02acb8a8de9c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:06 crc kubenswrapper[4756]: I1124 13:59:06.827473 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-pq4dj_2750f3ce-2cc3-41e8-a2b5-7a96e17c9842/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:07 crc kubenswrapper[4756]: I1124 13:59:07.091777 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-dvbcg_a4fc331b-d9d7-4748-b1ef-2fae03d9b525/init/0.log" Nov 24 13:59:07 crc kubenswrapper[4756]: I1124 13:59:07.319842 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-dvbcg_a4fc331b-d9d7-4748-b1ef-2fae03d9b525/init/0.log" Nov 24 13:59:07 crc kubenswrapper[4756]: I1124 13:59:07.345233 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-dvbcg_a4fc331b-d9d7-4748-b1ef-2fae03d9b525/dnsmasq-dns/0.log" Nov 24 13:59:07 crc kubenswrapper[4756]: I1124 13:59:07.378462 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-nbqwh_26be1a13-f657-4240-ba64-a260d9a6355a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:07 crc kubenswrapper[4756]: I1124 13:59:07.571825 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_505721db-c67e-42b6-b508-11cd950bc272/glance-log/0.log" Nov 24 13:59:07 crc kubenswrapper[4756]: I1124 13:59:07.596097 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_505721db-c67e-42b6-b508-11cd950bc272/glance-httpd/0.log" Nov 24 13:59:07 crc kubenswrapper[4756]: I1124 13:59:07.778611 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_3add353c-985b-4ed2-9bcf-a64e03c5479a/glance-httpd/0.log" Nov 24 13:59:07 crc kubenswrapper[4756]: I1124 13:59:07.805098 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3add353c-985b-4ed2-9bcf-a64e03c5479a/glance-log/0.log" Nov 24 13:59:07 crc kubenswrapper[4756]: I1124 13:59:07.928936 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-585c6478b8-gsbzg_6ae02ece-f457-4943-92fe-9569b5083f41/horizon/0.log" Nov 24 13:59:08 crc kubenswrapper[4756]: I1124 13:59:08.154972 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-pdmgd_07b45682-fb34-44c2-8fa1-fcf25559773e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:08 crc kubenswrapper[4756]: I1124 13:59:08.381291 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-z5l4l_6d74c7df-a689-4879-9319-a808a2f726cb/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:08 crc kubenswrapper[4756]: I1124 13:59:08.606102 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29399821-rlf75_c6cdb570-24c3-419d-b75b-7bd66ec283a3/keystone-cron/0.log" Nov 24 13:59:08 crc kubenswrapper[4756]: I1124 13:59:08.649092 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-585c6478b8-gsbzg_6ae02ece-f457-4943-92fe-9569b5083f41/horizon-log/0.log" Nov 24 13:59:08 crc kubenswrapper[4756]: I1124 13:59:08.823387 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d6fdbd64-1ed5-48d3-a245-a13416afe4d9/kube-state-metrics/0.log" Nov 24 13:59:08 crc kubenswrapper[4756]: I1124 13:59:08.935548 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-cnghh_9956cbef-8286-4c85-9c91-4e476d82d3d9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:08 crc kubenswrapper[4756]: I1124 13:59:08.955556 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5995df89cc-sxcgq_77785a15-6850-4685-8fbd-b129153baa32/keystone-api/0.log" Nov 24 13:59:09 crc kubenswrapper[4756]: I1124 13:59:09.423084 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b4ddfdbf7-zd9s5_d2f5b7c5-30dd-4145-a18a-fe929e4d660a/neutron-httpd/0.log" Nov 24 13:59:09 crc kubenswrapper[4756]: I1124 13:59:09.489097 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4pj6_7cf15818-9c96-4bbe-bb89-6d26aff5bfbe/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:09 crc kubenswrapper[4756]: I1124 13:59:09.509301 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b4ddfdbf7-zd9s5_d2f5b7c5-30dd-4145-a18a-fe929e4d660a/neutron-api/0.log" Nov 24 13:59:10 crc kubenswrapper[4756]: I1124 13:59:10.070059 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_c79e91c5-3d0d-4e9a-84d0-0e00e0875fb3/nova-cell0-conductor-conductor/0.log" Nov 24 13:59:10 crc kubenswrapper[4756]: I1124 13:59:10.496654 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_94da8bb7-f5c5-4411-be93-40a15bb4c121/nova-cell1-conductor-conductor/0.log" Nov 24 13:59:10 crc kubenswrapper[4756]: I1124 13:59:10.752795 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_54bdc4b7-e42a-49b9-b81e-d817f3c08555/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 13:59:10 crc kubenswrapper[4756]: I1124 13:59:10.781937 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_c8190147-b7ca-47e1-86f0-54dad2dbc996/nova-api-log/0.log" Nov 24 13:59:10 crc kubenswrapper[4756]: I1124 13:59:10.945337 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-mljzt_0cc3b9cc-6392-479d-bb83-af7c5fe6d79d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:11 crc kubenswrapper[4756]: I1124 13:59:11.120394 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fbc35708-5fe2-4f73-b7f0-958f40e12f63/nova-metadata-log/0.log" Nov 24 13:59:11 crc kubenswrapper[4756]: I1124 13:59:11.188741 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c8190147-b7ca-47e1-86f0-54dad2dbc996/nova-api-api/0.log" Nov 24 13:59:12 crc kubenswrapper[4756]: I1124 13:59:12.034583 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_80020f7a-2503-4446-84ea-148cb2bac0be/mysql-bootstrap/0.log" Nov 24 13:59:12 crc kubenswrapper[4756]: I1124 13:59:12.266423 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_80020f7a-2503-4446-84ea-148cb2bac0be/mysql-bootstrap/0.log" Nov 24 13:59:12 crc kubenswrapper[4756]: I1124 13:59:12.271983 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_80020f7a-2503-4446-84ea-148cb2bac0be/galera/0.log" Nov 24 13:59:12 crc kubenswrapper[4756]: I1124 13:59:12.402394 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_18d356c0-8e84-4ec3-b61c-bef4f3906505/nova-scheduler-scheduler/0.log" Nov 24 13:59:12 crc kubenswrapper[4756]: I1124 13:59:12.500917 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45/mysql-bootstrap/0.log" Nov 24 13:59:12 crc kubenswrapper[4756]: I1124 13:59:12.756343 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45/mysql-bootstrap/0.log" Nov 24 13:59:12 crc kubenswrapper[4756]: I1124 13:59:12.816272 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ab56de7f-fdfb-4e8b-9867-8e1e47b8ca45/galera/0.log" Nov 24 13:59:12 crc kubenswrapper[4756]: I1124 13:59:12.932111 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_08b95fcc-45c6-4618-bdad-3fb8c095e753/openstackclient/0.log" Nov 24 13:59:13 crc kubenswrapper[4756]: I1124 13:59:13.308760 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fbc35708-5fe2-4f73-b7f0-958f40e12f63/nova-metadata-metadata/0.log" Nov 24 13:59:13 crc kubenswrapper[4756]: I1124 13:59:13.395714 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2lk9k_f9af141a-c02a-4457-b68e-111765a62280/ovn-controller/0.log" Nov 24 13:59:13 crc kubenswrapper[4756]: I1124 13:59:13.449963 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jldfh_fb0e2af7-5d32-48ae-9f03-91233a28ed8e/openstack-network-exporter/0.log" Nov 24 13:59:13 crc kubenswrapper[4756]: I1124 13:59:13.731978 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v5r4t_2d0b4104-d3c6-4219-b239-a52830b8429b/ovsdb-server-init/0.log" Nov 24 13:59:13 crc kubenswrapper[4756]: I1124 13:59:13.949139 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v5r4t_2d0b4104-d3c6-4219-b239-a52830b8429b/ovsdb-server/0.log" Nov 24 13:59:13 crc kubenswrapper[4756]: I1124 13:59:13.956652 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v5r4t_2d0b4104-d3c6-4219-b239-a52830b8429b/ovsdb-server-init/0.log" Nov 24 13:59:13 crc kubenswrapper[4756]: I1124 13:59:13.957282 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-v5r4t_2d0b4104-d3c6-4219-b239-a52830b8429b/ovs-vswitchd/0.log" Nov 24 13:59:14 crc kubenswrapper[4756]: I1124 13:59:14.193756 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-mp7tr_e0a0f4dd-db57-4645-9b06-51c0416636f4/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:14 crc kubenswrapper[4756]: I1124 13:59:14.204951 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6f03b394-8de8-41e4-9cbe-a09bc8e922ad/openstack-network-exporter/0.log" Nov 24 13:59:14 crc kubenswrapper[4756]: I1124 13:59:14.273095 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6f03b394-8de8-41e4-9cbe-a09bc8e922ad/ovn-northd/0.log" Nov 24 13:59:14 crc kubenswrapper[4756]: I1124 13:59:14.385939 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f016c6c2-d6cf-42ff-a700-314a97bb1bcc/openstack-network-exporter/0.log" Nov 24 13:59:14 crc kubenswrapper[4756]: I1124 13:59:14.439546 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f016c6c2-d6cf-42ff-a700-314a97bb1bcc/ovsdbserver-nb/0.log" Nov 24 13:59:14 crc kubenswrapper[4756]: I1124 13:59:14.670870 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_57506583-001b-4baf-b8b1-6cd4fc282472/ovsdbserver-sb/0.log" Nov 24 13:59:14 crc kubenswrapper[4756]: I1124 13:59:14.674711 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_57506583-001b-4baf-b8b1-6cd4fc282472/openstack-network-exporter/0.log" Nov 24 13:59:14 crc kubenswrapper[4756]: I1124 13:59:14.921345 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f8c9745c6-b4wdj_b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7/placement-api/0.log" Nov 24 13:59:14 crc kubenswrapper[4756]: I1124 13:59:14.964033 4756 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_53e70a60-dcdd-4bec-b24f-b37ed751bb90/init-config-reloader/0.log" Nov 24 13:59:15 crc kubenswrapper[4756]: I1124 13:59:15.066377 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f8c9745c6-b4wdj_b6fa53ab-4c45-498e-bdeb-6b13c30ea7f7/placement-log/0.log" Nov 24 13:59:15 crc kubenswrapper[4756]: I1124 13:59:15.190689 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_53e70a60-dcdd-4bec-b24f-b37ed751bb90/config-reloader/0.log" Nov 24 13:59:15 crc kubenswrapper[4756]: I1124 13:59:15.203090 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_53e70a60-dcdd-4bec-b24f-b37ed751bb90/init-config-reloader/0.log" Nov 24 13:59:15 crc kubenswrapper[4756]: I1124 13:59:15.206938 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_53e70a60-dcdd-4bec-b24f-b37ed751bb90/prometheus/0.log" Nov 24 13:59:15 crc kubenswrapper[4756]: I1124 13:59:15.268872 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_53e70a60-dcdd-4bec-b24f-b37ed751bb90/thanos-sidecar/0.log" Nov 24 13:59:15 crc kubenswrapper[4756]: I1124 13:59:15.395413 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_80c56614-94b5-4a4b-843b-0941f1899ad8/setup-container/0.log" Nov 24 13:59:15 crc kubenswrapper[4756]: I1124 13:59:15.623446 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_80c56614-94b5-4a4b-843b-0941f1899ad8/setup-container/0.log" Nov 24 13:59:15 crc kubenswrapper[4756]: I1124 13:59:15.647434 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_80c56614-94b5-4a4b-843b-0941f1899ad8/rabbitmq/0.log" Nov 24 13:59:15 crc kubenswrapper[4756]: I1124 13:59:15.673085 4756 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_df531574-9350-4c19-bc09-b95744b731d0/setup-container/0.log" Nov 24 13:59:15 crc kubenswrapper[4756]: I1124 13:59:15.906263 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_df531574-9350-4c19-bc09-b95744b731d0/setup-container/0.log" Nov 24 13:59:16 crc kubenswrapper[4756]: I1124 13:59:16.009705 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_df531574-9350-4c19-bc09-b95744b731d0/rabbitmq/0.log" Nov 24 13:59:16 crc kubenswrapper[4756]: I1124 13:59:16.032477 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-pmzj6_815b1dea-8fed-47a0-bb79-5eb5bb428c34/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:16 crc kubenswrapper[4756]: I1124 13:59:16.178568 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-42v8v_2b3ef56e-99e5-44c6-8a14-b49385bf3144/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:16 crc kubenswrapper[4756]: I1124 13:59:16.225674 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-x5rlm_1f6dbb8f-7ae0-4132-ac08-12d04b55bb90/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:16 crc kubenswrapper[4756]: I1124 13:59:16.459317 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-564fv_16cccf4b-aeec-4529-9f5f-547e0df302e1/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:16 crc kubenswrapper[4756]: I1124 13:59:16.562861 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-c6sz4_8b1a9d2c-d952-4c28-9b75-11ce1ec4f5a1/ssh-known-hosts-edpm-deployment/0.log" Nov 24 13:59:16 crc kubenswrapper[4756]: I1124 13:59:16.742273 4756 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-proxy-7646996fbc-r65ms_cfa5b0a5-395b-463e-aeb1-21b5cca10b22/proxy-server/0.log" Nov 24 13:59:16 crc kubenswrapper[4756]: I1124 13:59:16.958545 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4x5lq_6e1e7fc3-fb76-4de7-8a1a-7b93e50faf07/swift-ring-rebalance/0.log" Nov 24 13:59:16 crc kubenswrapper[4756]: I1124 13:59:16.972207 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7646996fbc-r65ms_cfa5b0a5-395b-463e-aeb1-21b5cca10b22/proxy-httpd/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.086418 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/account-auditor/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.207287 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/account-reaper/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.253286 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/account-replicator/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.257279 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/account-server/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.305320 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/container-auditor/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.449894 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/container-server/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.464500 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/container-updater/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.492420 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/container-replicator/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.564224 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/object-auditor/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.670742 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/object-expirer/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.700339 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/object-replicator/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.727318 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/object-server/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.809126 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/object-updater/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.865605 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/rsync/0.log" Nov 24 13:59:17 crc kubenswrapper[4756]: I1124 13:59:17.912229 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9cf650c1-2692-4b3d-89c5-5e3e0178e213/swift-recon-cron/0.log" Nov 24 13:59:18 crc kubenswrapper[4756]: I1124 13:59:18.059586 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-bd8qw_24a1eee7-f667-475d-9b05-0b9f49a5619c/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:18 crc kubenswrapper[4756]: I1124 13:59:18.160415 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_931a5dda-ad1f-4595-a5b8-3b1820afb648/tempest-tests-tempest-tests-runner/0.log" Nov 24 13:59:18 crc kubenswrapper[4756]: I1124 13:59:18.291671 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_11285260-11ac-42da-b521-1be38199040e/test-operator-logs-container/0.log" Nov 24 13:59:18 crc kubenswrapper[4756]: I1124 13:59:18.395950 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-lzhf4_2112cb18-ecf9-43ff-b22c-37044d2b64e2/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 13:59:19 crc kubenswrapper[4756]: I1124 13:59:19.062431 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_88bd3f9d-e4a1-4cd0-afe2-e03876ec5c2d/watcher-applier/0.log" Nov 24 13:59:20 crc kubenswrapper[4756]: I1124 13:59:20.488421 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_c84e5d83-17b7-47c7-9952-9e6942940b2a/watcher-api-log/0.log" Nov 24 13:59:20 crc kubenswrapper[4756]: I1124 13:59:20.726520 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_6eff196d-2bdb-48c0-9c64-f8f0836f5450/watcher-decision-engine/0.log" Nov 24 13:59:22 crc kubenswrapper[4756]: I1124 13:59:22.393683 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_be09de3c-c143-4b11-98ca-45292b9b015c/memcached/0.log" Nov 24 13:59:23 crc kubenswrapper[4756]: I1124 13:59:23.243622 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-api-0_c84e5d83-17b7-47c7-9952-9e6942940b2a/watcher-api/0.log" Nov 24 13:59:45 crc kubenswrapper[4756]: I1124 13:59:45.966431 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/util/0.log" Nov 24 13:59:46 crc kubenswrapper[4756]: I1124 13:59:46.135531 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/pull/0.log" Nov 24 13:59:46 crc kubenswrapper[4756]: I1124 13:59:46.139538 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/util/0.log" Nov 24 13:59:46 crc kubenswrapper[4756]: I1124 13:59:46.155574 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/pull/0.log" Nov 24 13:59:46 crc kubenswrapper[4756]: I1124 13:59:46.353050 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/util/0.log" Nov 24 13:59:46 crc kubenswrapper[4756]: I1124 13:59:46.360284 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/pull/0.log" Nov 24 13:59:46 crc kubenswrapper[4756]: I1124 13:59:46.363946 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d962b36918e0f6e9567f73735791120662ad5eb3f5c4549d5d712910ekxx5q_688f1e7d-e519-4f20-acae-7b329d42da9b/extract/0.log" Nov 24 13:59:46 crc kubenswrapper[4756]: I1124 13:59:46.540548 4756 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-s8jfx_81563dca-2369-4349-9881-b2031df19de0/kube-rbac-proxy/0.log" Nov 24 13:59:46 crc kubenswrapper[4756]: I1124 13:59:46.626012 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-5srrx_0aa7a2bc-482f-4ed4-820d-331ea6d971c7/kube-rbac-proxy/0.log" Nov 24 13:59:46 crc kubenswrapper[4756]: I1124 13:59:46.635170 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-s8jfx_81563dca-2369-4349-9881-b2031df19de0/manager/0.log" Nov 24 13:59:46 crc kubenswrapper[4756]: I1124 13:59:46.757958 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-5srrx_0aa7a2bc-482f-4ed4-820d-331ea6d971c7/manager/0.log" Nov 24 13:59:46 crc kubenswrapper[4756]: I1124 13:59:46.822209 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-djhp4_3bc7fab7-280b-4964-a1f0-51f0b59438ed/manager/0.log" Nov 24 13:59:46 crc kubenswrapper[4756]: I1124 13:59:46.861339 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-djhp4_3bc7fab7-280b-4964-a1f0-51f0b59438ed/kube-rbac-proxy/0.log" Nov 24 13:59:46 crc kubenswrapper[4756]: I1124 13:59:46.981399 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-w22c4_91ae544e-de6e-44e8-9119-eae33586fe56/kube-rbac-proxy/0.log" Nov 24 13:59:47 crc kubenswrapper[4756]: I1124 13:59:47.119716 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-w22c4_91ae544e-de6e-44e8-9119-eae33586fe56/manager/0.log" Nov 24 13:59:47 
crc kubenswrapper[4756]: I1124 13:59:47.160103 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-p9zgm_e2224700-f8c7-4380-95c5-537e168c7e99/kube-rbac-proxy/0.log" Nov 24 13:59:47 crc kubenswrapper[4756]: I1124 13:59:47.217150 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-p9zgm_e2224700-f8c7-4380-95c5-537e168c7e99/manager/0.log" Nov 24 13:59:47 crc kubenswrapper[4756]: I1124 13:59:47.382168 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-sh48c_99dfd9fb-d4ea-4f2a-bbd5-670d1a75c7fd/kube-rbac-proxy/0.log" Nov 24 13:59:47 crc kubenswrapper[4756]: I1124 13:59:47.425577 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-sh48c_99dfd9fb-d4ea-4f2a-bbd5-670d1a75c7fd/manager/0.log" Nov 24 13:59:47 crc kubenswrapper[4756]: I1124 13:59:47.561092 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-hbs6w_8d4269ad-a2ff-47be-bade-792bbf616cf2/kube-rbac-proxy/0.log" Nov 24 13:59:47 crc kubenswrapper[4756]: I1124 13:59:47.677960 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-kqtc5_4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e/kube-rbac-proxy/0.log" Nov 24 13:59:47 crc kubenswrapper[4756]: I1124 13:59:47.747571 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-kqtc5_4ce68fc2-b4c1-4d94-a65d-bb7de0530e1e/manager/0.log" Nov 24 13:59:47 crc kubenswrapper[4756]: I1124 13:59:47.826452 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-hbs6w_8d4269ad-a2ff-47be-bade-792bbf616cf2/manager/0.log" Nov 24 13:59:47 crc kubenswrapper[4756]: I1124 13:59:47.912377 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-s2rwz_09cf908e-b30f-47ea-a4d1-2e50a192289f/kube-rbac-proxy/0.log" Nov 24 13:59:48 crc kubenswrapper[4756]: I1124 13:59:48.027668 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-s2rwz_09cf908e-b30f-47ea-a4d1-2e50a192289f/manager/0.log" Nov 24 13:59:48 crc kubenswrapper[4756]: I1124 13:59:48.113180 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-9q9zc_991621b1-366e-4d35-b1b7-6380e506ea08/kube-rbac-proxy/0.log" Nov 24 13:59:48 crc kubenswrapper[4756]: I1124 13:59:48.121012 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-9q9zc_991621b1-366e-4d35-b1b7-6380e506ea08/manager/0.log" Nov 24 13:59:48 crc kubenswrapper[4756]: I1124 13:59:48.202319 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-jgtkf_274dfe9d-6821-481f-a605-bf8fbf101f89/kube-rbac-proxy/0.log" Nov 24 13:59:48 crc kubenswrapper[4756]: I1124 13:59:48.318981 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-jgtkf_274dfe9d-6821-481f-a605-bf8fbf101f89/manager/0.log" Nov 24 13:59:48 crc kubenswrapper[4756]: I1124 13:59:48.368517 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-vwvff_8465956b-6245-447e-adcd-7ba8367ca117/kube-rbac-proxy/0.log" Nov 24 13:59:48 crc kubenswrapper[4756]: I1124 13:59:48.441626 
4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-vwvff_8465956b-6245-447e-adcd-7ba8367ca117/manager/0.log" Nov 24 13:59:48 crc kubenswrapper[4756]: I1124 13:59:48.508640 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-xjntp_bfc7de9e-743d-4492-979c-7043fb8b41d1/kube-rbac-proxy/0.log" Nov 24 13:59:48 crc kubenswrapper[4756]: I1124 13:59:48.611343 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-xjntp_bfc7de9e-743d-4492-979c-7043fb8b41d1/manager/0.log" Nov 24 13:59:48 crc kubenswrapper[4756]: I1124 13:59:48.708807 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-m4r6n_f874e9c8-d248-46c4-a1f2-8912827db14f/manager/0.log" Nov 24 13:59:48 crc kubenswrapper[4756]: I1124 13:59:48.773429 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-m4r6n_f874e9c8-d248-46c4-a1f2-8912827db14f/kube-rbac-proxy/0.log" Nov 24 13:59:48 crc kubenswrapper[4756]: I1124 13:59:48.850061 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb_94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f/kube-rbac-proxy/0.log" Nov 24 13:59:48 crc kubenswrapper[4756]: I1124 13:59:48.884223 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-mqtkb_94f6fb8a-7d3b-4e68-bfe8-c62edc439c4f/manager/0.log" Nov 24 13:59:49 crc kubenswrapper[4756]: I1124 13:59:49.295890 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-666974d685-8lth8_0e481139-e850-4597-b98b-0a2aa8b1add9/operator/0.log" Nov 24 
13:59:49 crc kubenswrapper[4756]: I1124 13:59:49.321516 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7jgvj_bc62b86b-8abe-4660-b5dd-e80f36962d0a/registry-server/0.log" Nov 24 13:59:49 crc kubenswrapper[4756]: I1124 13:59:49.859367 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-l9fh4_46c6804b-e74a-42d0-bc4e-2ffa7a5fa491/kube-rbac-proxy/0.log" Nov 24 13:59:49 crc kubenswrapper[4756]: I1124 13:59:49.963031 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-l9fh4_46c6804b-e74a-42d0-bc4e-2ffa7a5fa491/manager/0.log" Nov 24 13:59:50 crc kubenswrapper[4756]: I1124 13:59:50.057906 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-dkjtg_248f663c-2ddc-487f-a33c-9d7b9bad23be/kube-rbac-proxy/0.log" Nov 24 13:59:50 crc kubenswrapper[4756]: I1124 13:59:50.125513 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-dkjtg_248f663c-2ddc-487f-a33c-9d7b9bad23be/manager/0.log" Nov 24 13:59:50 crc kubenswrapper[4756]: I1124 13:59:50.197133 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-57bd844978-vgphd_fc8be713-d12e-4289-adbd-a3aee9ebf603/manager/0.log" Nov 24 13:59:50 crc kubenswrapper[4756]: I1124 13:59:50.259617 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-kvg52_0624f295-ba46-4e28-9f0f-356cfbe6ecbc/operator/0.log" Nov 24 13:59:50 crc kubenswrapper[4756]: I1124 13:59:50.309291 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rb4l5_4c0ece30-ae1b-4706-861c-2ee51f7332d7/kube-rbac-proxy/0.log" Nov 24 13:59:50 crc kubenswrapper[4756]: I1124 13:59:50.445122 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-rb4l5_4c0ece30-ae1b-4706-861c-2ee51f7332d7/manager/0.log" Nov 24 13:59:50 crc kubenswrapper[4756]: I1124 13:59:50.445587 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-spm8x_7480249a-d35a-4768-b5cc-daebd6f82c9b/kube-rbac-proxy/0.log" Nov 24 13:59:50 crc kubenswrapper[4756]: I1124 13:59:50.659032 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-5mpgm_2f99acab-4016-43dc-ab21-6d0c920def14/kube-rbac-proxy/0.log" Nov 24 13:59:50 crc kubenswrapper[4756]: I1124 13:59:50.706342 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-5mpgm_2f99acab-4016-43dc-ab21-6d0c920def14/manager/0.log" Nov 24 13:59:50 crc kubenswrapper[4756]: I1124 13:59:50.720093 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-spm8x_7480249a-d35a-4768-b5cc-daebd6f82c9b/manager/0.log" Nov 24 13:59:50 crc kubenswrapper[4756]: I1124 13:59:50.789051 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7445f8dd59-46b2l_eb1f334a-14e2-4f63-8168-e5db902d8e70/kube-rbac-proxy/0.log" Nov 24 13:59:50 crc kubenswrapper[4756]: I1124 13:59:50.868084 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7445f8dd59-46b2l_eb1f334a-14e2-4f63-8168-e5db902d8e70/manager/0.log" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.174037 4756 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4"] Nov 24 14:00:00 crc kubenswrapper[4756]: E1124 14:00:00.175799 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a398f678-abd5-4484-bcfd-e601cc8c572a" containerName="container-00" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.175831 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a398f678-abd5-4484-bcfd-e601cc8c572a" containerName="container-00" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.176176 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a398f678-abd5-4484-bcfd-e601cc8c572a" containerName="container-00" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.177360 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.209590 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.209881 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.255540 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4"] Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.315273 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1d9a07a-d70c-4440-8e95-ced1376f0971-secret-volume\") pod \"collect-profiles-29399880-vcfg4\" (UID: \"f1d9a07a-d70c-4440-8e95-ced1376f0971\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" Nov 24 14:00:00 crc 
kubenswrapper[4756]: I1124 14:00:00.315392 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q4g8\" (UniqueName: \"kubernetes.io/projected/f1d9a07a-d70c-4440-8e95-ced1376f0971-kube-api-access-2q4g8\") pod \"collect-profiles-29399880-vcfg4\" (UID: \"f1d9a07a-d70c-4440-8e95-ced1376f0971\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.315479 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1d9a07a-d70c-4440-8e95-ced1376f0971-config-volume\") pod \"collect-profiles-29399880-vcfg4\" (UID: \"f1d9a07a-d70c-4440-8e95-ced1376f0971\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.417733 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1d9a07a-d70c-4440-8e95-ced1376f0971-secret-volume\") pod \"collect-profiles-29399880-vcfg4\" (UID: \"f1d9a07a-d70c-4440-8e95-ced1376f0971\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.417823 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q4g8\" (UniqueName: \"kubernetes.io/projected/f1d9a07a-d70c-4440-8e95-ced1376f0971-kube-api-access-2q4g8\") pod \"collect-profiles-29399880-vcfg4\" (UID: \"f1d9a07a-d70c-4440-8e95-ced1376f0971\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.417904 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1d9a07a-d70c-4440-8e95-ced1376f0971-config-volume\") pod 
\"collect-profiles-29399880-vcfg4\" (UID: \"f1d9a07a-d70c-4440-8e95-ced1376f0971\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.418708 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1d9a07a-d70c-4440-8e95-ced1376f0971-config-volume\") pod \"collect-profiles-29399880-vcfg4\" (UID: \"f1d9a07a-d70c-4440-8e95-ced1376f0971\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.424038 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1d9a07a-d70c-4440-8e95-ced1376f0971-secret-volume\") pod \"collect-profiles-29399880-vcfg4\" (UID: \"f1d9a07a-d70c-4440-8e95-ced1376f0971\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.433477 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q4g8\" (UniqueName: \"kubernetes.io/projected/f1d9a07a-d70c-4440-8e95-ced1376f0971-kube-api-access-2q4g8\") pod \"collect-profiles-29399880-vcfg4\" (UID: \"f1d9a07a-d70c-4440-8e95-ced1376f0971\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" Nov 24 14:00:00 crc kubenswrapper[4756]: I1124 14:00:00.539991 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" Nov 24 14:00:01 crc kubenswrapper[4756]: I1124 14:00:01.137955 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4"] Nov 24 14:00:01 crc kubenswrapper[4756]: I1124 14:00:01.777532 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" event={"ID":"f1d9a07a-d70c-4440-8e95-ced1376f0971","Type":"ContainerStarted","Data":"2766fb77f156198b7a5b4ddd17c808ddb238cca00d9172ed37edcefc2e2d379d"} Nov 24 14:00:01 crc kubenswrapper[4756]: I1124 14:00:01.777583 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" event={"ID":"f1d9a07a-d70c-4440-8e95-ced1376f0971","Type":"ContainerStarted","Data":"6a8b21629111ab732f29e5c1971bd39fa4d39868878517c2d85f163f8560763f"} Nov 24 14:00:01 crc kubenswrapper[4756]: I1124 14:00:01.796378 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" podStartSLOduration=1.796352215 podStartE2EDuration="1.796352215s" podCreationTimestamp="2025-11-24 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:00:01.794595727 +0000 UTC m=+5534.152109889" watchObservedRunningTime="2025-11-24 14:00:01.796352215 +0000 UTC m=+5534.153866357" Nov 24 14:00:02 crc kubenswrapper[4756]: I1124 14:00:02.788668 4756 generic.go:334] "Generic (PLEG): container finished" podID="f1d9a07a-d70c-4440-8e95-ced1376f0971" containerID="2766fb77f156198b7a5b4ddd17c808ddb238cca00d9172ed37edcefc2e2d379d" exitCode=0 Nov 24 14:00:02 crc kubenswrapper[4756]: I1124 14:00:02.788730 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" event={"ID":"f1d9a07a-d70c-4440-8e95-ced1376f0971","Type":"ContainerDied","Data":"2766fb77f156198b7a5b4ddd17c808ddb238cca00d9172ed37edcefc2e2d379d"} Nov 24 14:00:03 crc kubenswrapper[4756]: I1124 14:00:03.479356 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 14:00:03 crc kubenswrapper[4756]: I1124 14:00:03.479447 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.185568 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.295702 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q4g8\" (UniqueName: \"kubernetes.io/projected/f1d9a07a-d70c-4440-8e95-ced1376f0971-kube-api-access-2q4g8\") pod \"f1d9a07a-d70c-4440-8e95-ced1376f0971\" (UID: \"f1d9a07a-d70c-4440-8e95-ced1376f0971\") " Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.295872 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1d9a07a-d70c-4440-8e95-ced1376f0971-secret-volume\") pod \"f1d9a07a-d70c-4440-8e95-ced1376f0971\" (UID: \"f1d9a07a-d70c-4440-8e95-ced1376f0971\") " Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.295938 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1d9a07a-d70c-4440-8e95-ced1376f0971-config-volume\") pod \"f1d9a07a-d70c-4440-8e95-ced1376f0971\" (UID: \"f1d9a07a-d70c-4440-8e95-ced1376f0971\") " Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.297302 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1d9a07a-d70c-4440-8e95-ced1376f0971-config-volume" (OuterVolumeSpecName: "config-volume") pod "f1d9a07a-d70c-4440-8e95-ced1376f0971" (UID: "f1d9a07a-d70c-4440-8e95-ced1376f0971"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.305404 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d9a07a-d70c-4440-8e95-ced1376f0971-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f1d9a07a-d70c-4440-8e95-ced1376f0971" (UID: "f1d9a07a-d70c-4440-8e95-ced1376f0971"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.305454 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1d9a07a-d70c-4440-8e95-ced1376f0971-kube-api-access-2q4g8" (OuterVolumeSpecName: "kube-api-access-2q4g8") pod "f1d9a07a-d70c-4440-8e95-ced1376f0971" (UID: "f1d9a07a-d70c-4440-8e95-ced1376f0971"). InnerVolumeSpecName "kube-api-access-2q4g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.397984 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1d9a07a-d70c-4440-8e95-ced1376f0971-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.398029 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1d9a07a-d70c-4440-8e95-ced1376f0971-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.398041 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q4g8\" (UniqueName: \"kubernetes.io/projected/f1d9a07a-d70c-4440-8e95-ced1376f0971-kube-api-access-2q4g8\") on node \"crc\" DevicePath \"\"" Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.819811 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" event={"ID":"f1d9a07a-d70c-4440-8e95-ced1376f0971","Type":"ContainerDied","Data":"6a8b21629111ab732f29e5c1971bd39fa4d39868878517c2d85f163f8560763f"} Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.819859 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a8b21629111ab732f29e5c1971bd39fa4d39868878517c2d85f163f8560763f" Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.819868 4756 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399880-vcfg4" Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.888868 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m"] Nov 24 14:00:04 crc kubenswrapper[4756]: I1124 14:00:04.904283 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399835-hkz8m"] Nov 24 14:00:06 crc kubenswrapper[4756]: I1124 14:00:06.485923 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b6d1101-020e-478a-ad65-5a6b9da5e271" path="/var/lib/kubelet/pods/6b6d1101-020e-478a-ad65-5a6b9da5e271/volumes" Nov 24 14:00:09 crc kubenswrapper[4756]: I1124 14:00:09.261520 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-v5ndr_37bf3224-33a8-45ab-93fc-05a44ed3f535/control-plane-machine-set-operator/0.log" Nov 24 14:00:09 crc kubenswrapper[4756]: I1124 14:00:09.453584 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xhcw8_a0672b72-0b66-434e-8930-4297ea0f3f98/kube-rbac-proxy/0.log" Nov 24 14:00:09 crc kubenswrapper[4756]: I1124 14:00:09.476809 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xhcw8_a0672b72-0b66-434e-8930-4297ea0f3f98/machine-api-operator/0.log" Nov 24 14:00:21 crc kubenswrapper[4756]: I1124 14:00:21.836544 4756 scope.go:117] "RemoveContainer" containerID="22ca88d93d1871afd559fa583ef534b987418871d47c0d7bf76683b04d7b61a7" Nov 24 14:00:22 crc kubenswrapper[4756]: I1124 14:00:22.435218 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-v72fq_50967785-12d1-45d3-b9e1-03c7dcb00af4/cert-manager-controller/0.log" Nov 24 14:00:22 crc kubenswrapper[4756]: I1124 
14:00:22.638412 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-4xzdf_f2d8ba73-901c-4245-bb1f-37c63a3b7232/cert-manager-webhook/0.log" Nov 24 14:00:22 crc kubenswrapper[4756]: I1124 14:00:22.677805 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-4dcdb_029ff7e6-28c1-4bf7-9b5b-575230d5ed04/cert-manager-cainjector/0.log" Nov 24 14:00:24 crc kubenswrapper[4756]: I1124 14:00:24.869559 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7xjsf"] Nov 24 14:00:24 crc kubenswrapper[4756]: E1124 14:00:24.871731 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1d9a07a-d70c-4440-8e95-ced1376f0971" containerName="collect-profiles" Nov 24 14:00:24 crc kubenswrapper[4756]: I1124 14:00:24.871841 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1d9a07a-d70c-4440-8e95-ced1376f0971" containerName="collect-profiles" Nov 24 14:00:24 crc kubenswrapper[4756]: I1124 14:00:24.872237 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1d9a07a-d70c-4440-8e95-ced1376f0971" containerName="collect-profiles" Nov 24 14:00:24 crc kubenswrapper[4756]: I1124 14:00:24.874139 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:24 crc kubenswrapper[4756]: I1124 14:00:24.885741 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xjsf"] Nov 24 14:00:24 crc kubenswrapper[4756]: I1124 14:00:24.990214 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q5pl\" (UniqueName: \"kubernetes.io/projected/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-kube-api-access-8q5pl\") pod \"redhat-marketplace-7xjsf\" (UID: \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\") " pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:24 crc kubenswrapper[4756]: I1124 14:00:24.990497 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-catalog-content\") pod \"redhat-marketplace-7xjsf\" (UID: \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\") " pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:24 crc kubenswrapper[4756]: I1124 14:00:24.990560 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-utilities\") pod \"redhat-marketplace-7xjsf\" (UID: \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\") " pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:25 crc kubenswrapper[4756]: I1124 14:00:25.092453 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-catalog-content\") pod \"redhat-marketplace-7xjsf\" (UID: \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\") " pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:25 crc kubenswrapper[4756]: I1124 14:00:25.092524 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-utilities\") pod \"redhat-marketplace-7xjsf\" (UID: \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\") " pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:25 crc kubenswrapper[4756]: I1124 14:00:25.092648 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q5pl\" (UniqueName: \"kubernetes.io/projected/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-kube-api-access-8q5pl\") pod \"redhat-marketplace-7xjsf\" (UID: \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\") " pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:25 crc kubenswrapper[4756]: I1124 14:00:25.093072 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-catalog-content\") pod \"redhat-marketplace-7xjsf\" (UID: \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\") " pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:25 crc kubenswrapper[4756]: I1124 14:00:25.093081 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-utilities\") pod \"redhat-marketplace-7xjsf\" (UID: \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\") " pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:25 crc kubenswrapper[4756]: I1124 14:00:25.117263 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q5pl\" (UniqueName: \"kubernetes.io/projected/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-kube-api-access-8q5pl\") pod \"redhat-marketplace-7xjsf\" (UID: \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\") " pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:25 crc kubenswrapper[4756]: I1124 14:00:25.205297 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:25 crc kubenswrapper[4756]: I1124 14:00:25.755134 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xjsf"] Nov 24 14:00:26 crc kubenswrapper[4756]: I1124 14:00:26.077229 4756 generic.go:334] "Generic (PLEG): container finished" podID="2d325a7b-9e63-4a5b-90e7-af63a8a865c6" containerID="addc267af98033da47d72980c8e4fe2dd73e2aaaaac8aacc6e604075c355c457" exitCode=0 Nov 24 14:00:26 crc kubenswrapper[4756]: I1124 14:00:26.077330 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xjsf" event={"ID":"2d325a7b-9e63-4a5b-90e7-af63a8a865c6","Type":"ContainerDied","Data":"addc267af98033da47d72980c8e4fe2dd73e2aaaaac8aacc6e604075c355c457"} Nov 24 14:00:26 crc kubenswrapper[4756]: I1124 14:00:26.077595 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xjsf" event={"ID":"2d325a7b-9e63-4a5b-90e7-af63a8a865c6","Type":"ContainerStarted","Data":"7388e18ab3765f8770b1f4e72c091c69d52ba13cda3e64f16bea7ed2c0c8b578"} Nov 24 14:00:27 crc kubenswrapper[4756]: I1124 14:00:27.091628 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xjsf" event={"ID":"2d325a7b-9e63-4a5b-90e7-af63a8a865c6","Type":"ContainerStarted","Data":"88848dfe438a683c147328b32b5fc8580af00f3da02ffe14018076417c23e18f"} Nov 24 14:00:28 crc kubenswrapper[4756]: I1124 14:00:28.106538 4756 generic.go:334] "Generic (PLEG): container finished" podID="2d325a7b-9e63-4a5b-90e7-af63a8a865c6" containerID="88848dfe438a683c147328b32b5fc8580af00f3da02ffe14018076417c23e18f" exitCode=0 Nov 24 14:00:28 crc kubenswrapper[4756]: I1124 14:00:28.106638 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xjsf" 
event={"ID":"2d325a7b-9e63-4a5b-90e7-af63a8a865c6","Type":"ContainerDied","Data":"88848dfe438a683c147328b32b5fc8580af00f3da02ffe14018076417c23e18f"} Nov 24 14:00:29 crc kubenswrapper[4756]: I1124 14:00:29.149019 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xjsf" event={"ID":"2d325a7b-9e63-4a5b-90e7-af63a8a865c6","Type":"ContainerStarted","Data":"8644e9ab2d83a0798e0f2891f912c4f209be611c07273708dda311bf10f43f8f"} Nov 24 14:00:29 crc kubenswrapper[4756]: I1124 14:00:29.174620 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7xjsf" podStartSLOduration=2.536822646 podStartE2EDuration="5.174602371s" podCreationTimestamp="2025-11-24 14:00:24 +0000 UTC" firstStartedPulling="2025-11-24 14:00:26.081023695 +0000 UTC m=+5558.438537837" lastFinishedPulling="2025-11-24 14:00:28.71880342 +0000 UTC m=+5561.076317562" observedRunningTime="2025-11-24 14:00:29.165793614 +0000 UTC m=+5561.523307756" watchObservedRunningTime="2025-11-24 14:00:29.174602371 +0000 UTC m=+5561.532116513" Nov 24 14:00:33 crc kubenswrapper[4756]: I1124 14:00:33.478895 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 14:00:33 crc kubenswrapper[4756]: I1124 14:00:33.479559 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 14:00:35 crc kubenswrapper[4756]: I1124 14:00:35.205852 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:35 crc kubenswrapper[4756]: I1124 14:00:35.206957 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:35 crc kubenswrapper[4756]: I1124 14:00:35.265304 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:35 crc kubenswrapper[4756]: I1124 14:00:35.528490 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-xbnb6_c0c918b6-55ce-4aa8-b777-1b442a5c0ea9/nmstate-console-plugin/0.log" Nov 24 14:00:35 crc kubenswrapper[4756]: I1124 14:00:35.692763 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hg5kh_bb1daace-bab0-41df-a60c-cc01cd7013ea/nmstate-handler/0.log" Nov 24 14:00:35 crc kubenswrapper[4756]: I1124 14:00:35.757801 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-4c8wf_6e7aa8ba-7a40-4d85-8b19-86dfe7e87eb1/kube-rbac-proxy/0.log" Nov 24 14:00:35 crc kubenswrapper[4756]: I1124 14:00:35.849729 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-9gdrp_7d382e59-3e6d-496e-b637-3ef4848ddc24/nmstate-operator/0.log" Nov 24 14:00:35 crc kubenswrapper[4756]: I1124 14:00:35.854924 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-4c8wf_6e7aa8ba-7a40-4d85-8b19-86dfe7e87eb1/nmstate-metrics/0.log" Nov 24 14:00:36 crc kubenswrapper[4756]: I1124 14:00:36.050813 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-p6r5k_28634c25-efc4-43b6-92c5-0bc6b20aa941/nmstate-webhook/0.log" Nov 24 14:00:36 crc kubenswrapper[4756]: I1124 14:00:36.282698 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:36 crc kubenswrapper[4756]: I1124 14:00:36.337223 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xjsf"] Nov 24 14:00:38 crc kubenswrapper[4756]: I1124 14:00:38.251168 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7xjsf" podUID="2d325a7b-9e63-4a5b-90e7-af63a8a865c6" containerName="registry-server" containerID="cri-o://8644e9ab2d83a0798e0f2891f912c4f209be611c07273708dda311bf10f43f8f" gracePeriod=2 Nov 24 14:00:38 crc kubenswrapper[4756]: I1124 14:00:38.730061 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:38 crc kubenswrapper[4756]: I1124 14:00:38.795073 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q5pl\" (UniqueName: \"kubernetes.io/projected/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-kube-api-access-8q5pl\") pod \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\" (UID: \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\") " Nov 24 14:00:38 crc kubenswrapper[4756]: I1124 14:00:38.795132 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-catalog-content\") pod \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\" (UID: \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\") " Nov 24 14:00:38 crc kubenswrapper[4756]: I1124 14:00:38.795214 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-utilities\") pod \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\" (UID: \"2d325a7b-9e63-4a5b-90e7-af63a8a865c6\") " Nov 24 14:00:38 crc kubenswrapper[4756]: I1124 14:00:38.796480 4756 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-utilities" (OuterVolumeSpecName: "utilities") pod "2d325a7b-9e63-4a5b-90e7-af63a8a865c6" (UID: "2d325a7b-9e63-4a5b-90e7-af63a8a865c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 14:00:38 crc kubenswrapper[4756]: I1124 14:00:38.803347 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-kube-api-access-8q5pl" (OuterVolumeSpecName: "kube-api-access-8q5pl") pod "2d325a7b-9e63-4a5b-90e7-af63a8a865c6" (UID: "2d325a7b-9e63-4a5b-90e7-af63a8a865c6"). InnerVolumeSpecName "kube-api-access-8q5pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 14:00:38 crc kubenswrapper[4756]: I1124 14:00:38.817870 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d325a7b-9e63-4a5b-90e7-af63a8a865c6" (UID: "2d325a7b-9e63-4a5b-90e7-af63a8a865c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 14:00:38 crc kubenswrapper[4756]: I1124 14:00:38.897802 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q5pl\" (UniqueName: \"kubernetes.io/projected/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-kube-api-access-8q5pl\") on node \"crc\" DevicePath \"\"" Nov 24 14:00:38 crc kubenswrapper[4756]: I1124 14:00:38.897841 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 14:00:38 crc kubenswrapper[4756]: I1124 14:00:38.897857 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d325a7b-9e63-4a5b-90e7-af63a8a865c6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 14:00:39.266028 4756 generic.go:334] "Generic (PLEG): container finished" podID="2d325a7b-9e63-4a5b-90e7-af63a8a865c6" containerID="8644e9ab2d83a0798e0f2891f912c4f209be611c07273708dda311bf10f43f8f" exitCode=0 Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 14:00:39.266315 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xjsf" event={"ID":"2d325a7b-9e63-4a5b-90e7-af63a8a865c6","Type":"ContainerDied","Data":"8644e9ab2d83a0798e0f2891f912c4f209be611c07273708dda311bf10f43f8f"} Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 14:00:39.266617 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xjsf" event={"ID":"2d325a7b-9e63-4a5b-90e7-af63a8a865c6","Type":"ContainerDied","Data":"7388e18ab3765f8770b1f4e72c091c69d52ba13cda3e64f16bea7ed2c0c8b578"} Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 14:00:39.266656 4756 scope.go:117] "RemoveContainer" containerID="8644e9ab2d83a0798e0f2891f912c4f209be611c07273708dda311bf10f43f8f" Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 
14:00:39.266413 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xjsf" Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 14:00:39.293071 4756 scope.go:117] "RemoveContainer" containerID="88848dfe438a683c147328b32b5fc8580af00f3da02ffe14018076417c23e18f" Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 14:00:39.317535 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xjsf"] Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 14:00:39.323877 4756 scope.go:117] "RemoveContainer" containerID="addc267af98033da47d72980c8e4fe2dd73e2aaaaac8aacc6e604075c355c457" Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 14:00:39.328507 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xjsf"] Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 14:00:39.375127 4756 scope.go:117] "RemoveContainer" containerID="8644e9ab2d83a0798e0f2891f912c4f209be611c07273708dda311bf10f43f8f" Nov 24 14:00:39 crc kubenswrapper[4756]: E1124 14:00:39.375778 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8644e9ab2d83a0798e0f2891f912c4f209be611c07273708dda311bf10f43f8f\": container with ID starting with 8644e9ab2d83a0798e0f2891f912c4f209be611c07273708dda311bf10f43f8f not found: ID does not exist" containerID="8644e9ab2d83a0798e0f2891f912c4f209be611c07273708dda311bf10f43f8f" Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 14:00:39.375826 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8644e9ab2d83a0798e0f2891f912c4f209be611c07273708dda311bf10f43f8f"} err="failed to get container status \"8644e9ab2d83a0798e0f2891f912c4f209be611c07273708dda311bf10f43f8f\": rpc error: code = NotFound desc = could not find container \"8644e9ab2d83a0798e0f2891f912c4f209be611c07273708dda311bf10f43f8f\": container with ID starting with 
8644e9ab2d83a0798e0f2891f912c4f209be611c07273708dda311bf10f43f8f not found: ID does not exist" Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 14:00:39.375889 4756 scope.go:117] "RemoveContainer" containerID="88848dfe438a683c147328b32b5fc8580af00f3da02ffe14018076417c23e18f" Nov 24 14:00:39 crc kubenswrapper[4756]: E1124 14:00:39.376255 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88848dfe438a683c147328b32b5fc8580af00f3da02ffe14018076417c23e18f\": container with ID starting with 88848dfe438a683c147328b32b5fc8580af00f3da02ffe14018076417c23e18f not found: ID does not exist" containerID="88848dfe438a683c147328b32b5fc8580af00f3da02ffe14018076417c23e18f" Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 14:00:39.376287 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88848dfe438a683c147328b32b5fc8580af00f3da02ffe14018076417c23e18f"} err="failed to get container status \"88848dfe438a683c147328b32b5fc8580af00f3da02ffe14018076417c23e18f\": rpc error: code = NotFound desc = could not find container \"88848dfe438a683c147328b32b5fc8580af00f3da02ffe14018076417c23e18f\": container with ID starting with 88848dfe438a683c147328b32b5fc8580af00f3da02ffe14018076417c23e18f not found: ID does not exist" Nov 24 14:00:39 crc kubenswrapper[4756]: I1124 14:00:39.376307 4756 scope.go:117] "RemoveContainer" containerID="addc267af98033da47d72980c8e4fe2dd73e2aaaaac8aacc6e604075c355c457" Nov 24 14:00:39 crc kubenswrapper[4756]: E1124 14:00:39.376588 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"addc267af98033da47d72980c8e4fe2dd73e2aaaaac8aacc6e604075c355c457\": container with ID starting with addc267af98033da47d72980c8e4fe2dd73e2aaaaac8aacc6e604075c355c457 not found: ID does not exist" containerID="addc267af98033da47d72980c8e4fe2dd73e2aaaaac8aacc6e604075c355c457" Nov 24 14:00:39 crc 
kubenswrapper[4756]: I1124 14:00:39.376617 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"addc267af98033da47d72980c8e4fe2dd73e2aaaaac8aacc6e604075c355c457"} err="failed to get container status \"addc267af98033da47d72980c8e4fe2dd73e2aaaaac8aacc6e604075c355c457\": rpc error: code = NotFound desc = could not find container \"addc267af98033da47d72980c8e4fe2dd73e2aaaaac8aacc6e604075c355c457\": container with ID starting with addc267af98033da47d72980c8e4fe2dd73e2aaaaac8aacc6e604075c355c457 not found: ID does not exist" Nov 24 14:00:40 crc kubenswrapper[4756]: I1124 14:00:40.491368 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d325a7b-9e63-4a5b-90e7-af63a8a865c6" path="/var/lib/kubelet/pods/2d325a7b-9e63-4a5b-90e7-af63a8a865c6/volumes" Nov 24 14:00:50 crc kubenswrapper[4756]: I1124 14:00:50.244670 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5k8pq_e090cbac-2c8e-44a1-9df3-592d95aa0e66/kube-rbac-proxy/0.log" Nov 24 14:00:50 crc kubenswrapper[4756]: I1124 14:00:50.315953 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5k8pq_e090cbac-2c8e-44a1-9df3-592d95aa0e66/controller/0.log" Nov 24 14:00:50 crc kubenswrapper[4756]: I1124 14:00:50.450090 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-frr-files/0.log" Nov 24 14:00:50 crc kubenswrapper[4756]: I1124 14:00:50.611315 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-frr-files/0.log" Nov 24 14:00:50 crc kubenswrapper[4756]: I1124 14:00:50.629879 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-metrics/0.log" Nov 24 14:00:50 crc kubenswrapper[4756]: I1124 14:00:50.640763 4756 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-reloader/0.log" Nov 24 14:00:50 crc kubenswrapper[4756]: I1124 14:00:50.654096 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-reloader/0.log" Nov 24 14:00:50 crc kubenswrapper[4756]: I1124 14:00:50.819855 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-metrics/0.log" Nov 24 14:00:50 crc kubenswrapper[4756]: I1124 14:00:50.820595 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-reloader/0.log" Nov 24 14:00:50 crc kubenswrapper[4756]: I1124 14:00:50.858870 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-frr-files/0.log" Nov 24 14:00:50 crc kubenswrapper[4756]: I1124 14:00:50.896808 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-metrics/0.log" Nov 24 14:00:51 crc kubenswrapper[4756]: I1124 14:00:51.033278 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-frr-files/0.log" Nov 24 14:00:51 crc kubenswrapper[4756]: I1124 14:00:51.044185 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-metrics/0.log" Nov 24 14:00:51 crc kubenswrapper[4756]: I1124 14:00:51.069595 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/cp-reloader/0.log" Nov 24 14:00:51 crc kubenswrapper[4756]: I1124 14:00:51.109297 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/controller/0.log" Nov 24 14:00:51 crc kubenswrapper[4756]: I1124 14:00:51.215921 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/frr-metrics/0.log" Nov 24 14:00:51 crc kubenswrapper[4756]: I1124 14:00:51.270529 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/kube-rbac-proxy/0.log" Nov 24 14:00:51 crc kubenswrapper[4756]: I1124 14:00:51.317046 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/kube-rbac-proxy-frr/0.log" Nov 24 14:00:51 crc kubenswrapper[4756]: I1124 14:00:51.428685 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/reloader/0.log" Nov 24 14:00:51 crc kubenswrapper[4756]: I1124 14:00:51.531891 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-2db82_7b16b70e-daf1-4950-994b-b0e166b95215/frr-k8s-webhook-server/0.log" Nov 24 14:00:51 crc kubenswrapper[4756]: I1124 14:00:51.715284 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-585778954f-lwtdb_fe690ebd-7c38-400c-bd3e-ddec63e361ea/manager/0.log" Nov 24 14:00:51 crc kubenswrapper[4756]: I1124 14:00:51.876804 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7bbc7fc897-wd2m8_df074e39-b784-4804-afd8-3625ad3fecd0/webhook-server/0.log" Nov 24 14:00:52 crc kubenswrapper[4756]: I1124 14:00:52.039586 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r8zzq_54a65742-0318-409a-8a0e-e5c01abe2945/kube-rbac-proxy/0.log" Nov 24 14:00:52 crc kubenswrapper[4756]: I1124 14:00:52.568431 4756 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r8zzq_54a65742-0318-409a-8a0e-e5c01abe2945/speaker/0.log" Nov 24 14:00:52 crc kubenswrapper[4756]: I1124 14:00:52.835757 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v5pd2_30798bdf-4846-4408-82f4-b22ba7ec7f84/frr/0.log" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.147650 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29399881-8gqbx"] Nov 24 14:01:00 crc kubenswrapper[4756]: E1124 14:01:00.148528 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d325a7b-9e63-4a5b-90e7-af63a8a865c6" containerName="registry-server" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.148541 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d325a7b-9e63-4a5b-90e7-af63a8a865c6" containerName="registry-server" Nov 24 14:01:00 crc kubenswrapper[4756]: E1124 14:01:00.148572 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d325a7b-9e63-4a5b-90e7-af63a8a865c6" containerName="extract-utilities" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.148578 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d325a7b-9e63-4a5b-90e7-af63a8a865c6" containerName="extract-utilities" Nov 24 14:01:00 crc kubenswrapper[4756]: E1124 14:01:00.148588 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d325a7b-9e63-4a5b-90e7-af63a8a865c6" containerName="extract-content" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.148595 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d325a7b-9e63-4a5b-90e7-af63a8a865c6" containerName="extract-content" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.148772 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d325a7b-9e63-4a5b-90e7-af63a8a865c6" containerName="registry-server" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.149541 4756 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.159609 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399881-8gqbx"] Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.214732 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-config-data\") pod \"keystone-cron-29399881-8gqbx\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.214902 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-combined-ca-bundle\") pod \"keystone-cron-29399881-8gqbx\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.215000 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvvtd\" (UniqueName: \"kubernetes.io/projected/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-kube-api-access-wvvtd\") pod \"keystone-cron-29399881-8gqbx\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.215042 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-fernet-keys\") pod \"keystone-cron-29399881-8gqbx\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.317294 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-config-data\") pod \"keystone-cron-29399881-8gqbx\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.317726 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-combined-ca-bundle\") pod \"keystone-cron-29399881-8gqbx\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.317840 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvvtd\" (UniqueName: \"kubernetes.io/projected/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-kube-api-access-wvvtd\") pod \"keystone-cron-29399881-8gqbx\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.317886 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-fernet-keys\") pod \"keystone-cron-29399881-8gqbx\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.323520 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-fernet-keys\") pod \"keystone-cron-29399881-8gqbx\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.324740 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-combined-ca-bundle\") pod \"keystone-cron-29399881-8gqbx\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.325580 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-config-data\") pod \"keystone-cron-29399881-8gqbx\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.337790 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvvtd\" (UniqueName: \"kubernetes.io/projected/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-kube-api-access-wvvtd\") pod \"keystone-cron-29399881-8gqbx\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.480312 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:00 crc kubenswrapper[4756]: I1124 14:01:00.985828 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399881-8gqbx"] Nov 24 14:01:01 crc kubenswrapper[4756]: I1124 14:01:01.483110 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399881-8gqbx" event={"ID":"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1","Type":"ContainerStarted","Data":"c8db3863af35af4ee4590257fdba955114dbff2e1f7e0c72506b635434968bdc"} Nov 24 14:01:01 crc kubenswrapper[4756]: I1124 14:01:01.483495 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399881-8gqbx" event={"ID":"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1","Type":"ContainerStarted","Data":"1d58552c1028e758e70b5d52829e23315793d93a07024dd7a37f26990adcf24d"} Nov 24 14:01:03 crc kubenswrapper[4756]: I1124 14:01:03.478983 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 14:01:03 crc kubenswrapper[4756]: I1124 14:01:03.480955 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 14:01:03 crc kubenswrapper[4756]: I1124 14:01:03.481281 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 14:01:03 crc kubenswrapper[4756]: I1124 14:01:03.482303 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1a32eeb64eb89d82e5f773623aaf3c9abc2217a542f84b0a5a5ac837f28a5018"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 14:01:03 crc kubenswrapper[4756]: I1124 14:01:03.482634 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://1a32eeb64eb89d82e5f773623aaf3c9abc2217a542f84b0a5a5ac837f28a5018" gracePeriod=600 Nov 24 14:01:04 crc kubenswrapper[4756]: I1124 14:01:04.515642 4756 generic.go:334] "Generic (PLEG): container finished" podID="6dca1e4e-747a-48a7-a557-8e2ad7e27ca1" containerID="c8db3863af35af4ee4590257fdba955114dbff2e1f7e0c72506b635434968bdc" exitCode=0 Nov 24 14:01:04 crc kubenswrapper[4756]: I1124 14:01:04.515749 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399881-8gqbx" event={"ID":"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1","Type":"ContainerDied","Data":"c8db3863af35af4ee4590257fdba955114dbff2e1f7e0c72506b635434968bdc"} Nov 24 14:01:04 crc kubenswrapper[4756]: I1124 14:01:04.520146 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerID="1a32eeb64eb89d82e5f773623aaf3c9abc2217a542f84b0a5a5ac837f28a5018" exitCode=0 Nov 24 14:01:04 crc kubenswrapper[4756]: I1124 14:01:04.520188 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"1a32eeb64eb89d82e5f773623aaf3c9abc2217a542f84b0a5a5ac837f28a5018"} Nov 24 14:01:04 crc kubenswrapper[4756]: I1124 14:01:04.520240 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerStarted","Data":"20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0"} Nov 24 14:01:04 crc kubenswrapper[4756]: I1124 14:01:04.520268 4756 scope.go:117] "RemoveContainer" containerID="685ddb34009bfd082e834e585090e9f24b35e3008867e7797ea81b05687b7626" Nov 24 14:01:05 crc kubenswrapper[4756]: I1124 14:01:05.337871 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/util/0.log" Nov 24 14:01:05 crc kubenswrapper[4756]: I1124 14:01:05.548646 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/pull/0.log" Nov 24 14:01:05 crc kubenswrapper[4756]: I1124 14:01:05.593049 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/util/0.log" Nov 24 14:01:05 crc kubenswrapper[4756]: I1124 14:01:05.594035 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/pull/0.log" Nov 24 14:01:05 crc kubenswrapper[4756]: I1124 14:01:05.801832 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/pull/0.log" Nov 24 14:01:05 crc kubenswrapper[4756]: I1124 14:01:05.806268 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/extract/0.log" Nov 24 14:01:05 crc 
kubenswrapper[4756]: I1124 14:01:05.850290 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772emvcbb_6430b3f4-a359-4bb6-abd8-a0a4e39183b8/util/0.log" Nov 24 14:01:05 crc kubenswrapper[4756]: I1124 14:01:05.952674 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.037375 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/util/0.log" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.048815 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvvtd\" (UniqueName: \"kubernetes.io/projected/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-kube-api-access-wvvtd\") pod \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.048931 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-config-data\") pod \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.048969 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-combined-ca-bundle\") pod \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.049049 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-fernet-keys\") pod \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\" (UID: \"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1\") " Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.055298 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6dca1e4e-747a-48a7-a557-8e2ad7e27ca1" (UID: "6dca1e4e-747a-48a7-a557-8e2ad7e27ca1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.073414 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-kube-api-access-wvvtd" (OuterVolumeSpecName: "kube-api-access-wvvtd") pod "6dca1e4e-747a-48a7-a557-8e2ad7e27ca1" (UID: "6dca1e4e-747a-48a7-a557-8e2ad7e27ca1"). InnerVolumeSpecName "kube-api-access-wvvtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.103925 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dca1e4e-747a-48a7-a557-8e2ad7e27ca1" (UID: "6dca1e4e-747a-48a7-a557-8e2ad7e27ca1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.118237 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-config-data" (OuterVolumeSpecName: "config-data") pod "6dca1e4e-747a-48a7-a557-8e2ad7e27ca1" (UID: "6dca1e4e-747a-48a7-a557-8e2ad7e27ca1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.151337 4756 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.151387 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvvtd\" (UniqueName: \"kubernetes.io/projected/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-kube-api-access-wvvtd\") on node \"crc\" DevicePath \"\"" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.151413 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.151424 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dca1e4e-747a-48a7-a557-8e2ad7e27ca1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.181329 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/util/0.log" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.224483 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/pull/0.log" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.239459 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/pull/0.log" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.352048 4756 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/pull/0.log" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.399500 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/util/0.log" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.408650 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105qc62_851eb838-a360-48b4-a06e-85f114507ab6/extract/0.log" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.564113 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399881-8gqbx" event={"ID":"6dca1e4e-747a-48a7-a557-8e2ad7e27ca1","Type":"ContainerDied","Data":"1d58552c1028e758e70b5d52829e23315793d93a07024dd7a37f26990adcf24d"} Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.564144 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399881-8gqbx" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.564149 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d58552c1028e758e70b5d52829e23315793d93a07024dd7a37f26990adcf24d" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.569353 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/extract-utilities/0.log" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.736656 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/extract-utilities/0.log" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.756778 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/extract-content/0.log" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.783969 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/extract-content/0.log" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.926760 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/extract-utilities/0.log" Nov 24 14:01:06 crc kubenswrapper[4756]: I1124 14:01:06.945461 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/extract-content/0.log" Nov 24 14:01:07 crc kubenswrapper[4756]: I1124 14:01:07.169666 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/extract-utilities/0.log" Nov 24 14:01:07 crc kubenswrapper[4756]: I1124 
14:01:07.422345 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/extract-utilities/0.log" Nov 24 14:01:07 crc kubenswrapper[4756]: I1124 14:01:07.422350 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/extract-content/0.log" Nov 24 14:01:07 crc kubenswrapper[4756]: I1124 14:01:07.461100 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/extract-content/0.log" Nov 24 14:01:07 crc kubenswrapper[4756]: I1124 14:01:07.635966 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/extract-content/0.log" Nov 24 14:01:07 crc kubenswrapper[4756]: I1124 14:01:07.639866 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/extract-utilities/0.log" Nov 24 14:01:07 crc kubenswrapper[4756]: I1124 14:01:07.905274 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/util/0.log" Nov 24 14:01:08 crc kubenswrapper[4756]: I1124 14:01:08.084264 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l4qxn_cae50a20-6a68-4c81-a165-6eaeca6bcf3e/registry-server/0.log" Nov 24 14:01:08 crc kubenswrapper[4756]: I1124 14:01:08.126434 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kgv96_2d65918b-b65c-46af-a1da-adcf27b5ac69/registry-server/0.log" Nov 24 14:01:08 crc kubenswrapper[4756]: I1124 14:01:08.144131 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/util/0.log" Nov 24 14:01:08 crc kubenswrapper[4756]: I1124 14:01:08.152071 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/pull/0.log" Nov 24 14:01:08 crc kubenswrapper[4756]: I1124 14:01:08.253005 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/pull/0.log" Nov 24 14:01:08 crc kubenswrapper[4756]: I1124 14:01:08.422553 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/util/0.log" Nov 24 14:01:08 crc kubenswrapper[4756]: I1124 14:01:08.423955 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/extract/0.log" Nov 24 14:01:08 crc kubenswrapper[4756]: I1124 14:01:08.461204 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6chs75_ab5660ee-1372-4a12-9dbc-020b356597cd/pull/0.log" Nov 24 14:01:08 crc kubenswrapper[4756]: I1124 14:01:08.629130 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dzrdb_7cc1cad9-8e95-4b2c-bfb2-dd376178315f/marketplace-operator/0.log" Nov 24 14:01:08 crc kubenswrapper[4756]: I1124 14:01:08.682329 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/extract-utilities/0.log" Nov 24 14:01:08 crc kubenswrapper[4756]: 
I1124 14:01:08.845727 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/extract-utilities/0.log" Nov 24 14:01:08 crc kubenswrapper[4756]: I1124 14:01:08.854843 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/extract-content/0.log" Nov 24 14:01:08 crc kubenswrapper[4756]: I1124 14:01:08.871199 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/extract-content/0.log" Nov 24 14:01:09 crc kubenswrapper[4756]: I1124 14:01:09.038731 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/extract-content/0.log" Nov 24 14:01:09 crc kubenswrapper[4756]: I1124 14:01:09.073662 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/extract-utilities/0.log" Nov 24 14:01:09 crc kubenswrapper[4756]: I1124 14:01:09.126307 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-842fx_8d4ff55b-5d27-4ecd-bc71-9a91650d0e60/extract-utilities/0.log" Nov 24 14:01:09 crc kubenswrapper[4756]: I1124 14:01:09.305831 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmncc_010138d9-b91f-41a7-80a7-468667e43d51/registry-server/0.log" Nov 24 14:01:09 crc kubenswrapper[4756]: I1124 14:01:09.308773 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-842fx_8d4ff55b-5d27-4ecd-bc71-9a91650d0e60/extract-content/0.log" Nov 24 14:01:09 crc kubenswrapper[4756]: I1124 14:01:09.321015 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-842fx_8d4ff55b-5d27-4ecd-bc71-9a91650d0e60/extract-content/0.log" Nov 24 14:01:09 crc kubenswrapper[4756]: I1124 14:01:09.338541 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-842fx_8d4ff55b-5d27-4ecd-bc71-9a91650d0e60/extract-utilities/0.log" Nov 24 14:01:09 crc kubenswrapper[4756]: I1124 14:01:09.496389 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-842fx_8d4ff55b-5d27-4ecd-bc71-9a91650d0e60/extract-content/0.log" Nov 24 14:01:09 crc kubenswrapper[4756]: I1124 14:01:09.499658 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-842fx_8d4ff55b-5d27-4ecd-bc71-9a91650d0e60/extract-utilities/0.log" Nov 24 14:01:09 crc kubenswrapper[4756]: I1124 14:01:09.637809 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-842fx_8d4ff55b-5d27-4ecd-bc71-9a91650d0e60/registry-server/0.log" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.594484 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tjd7f"] Nov 24 14:01:21 crc kubenswrapper[4756]: E1124 14:01:21.595680 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dca1e4e-747a-48a7-a557-8e2ad7e27ca1" containerName="keystone-cron" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.595695 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dca1e4e-747a-48a7-a557-8e2ad7e27ca1" containerName="keystone-cron" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.595955 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dca1e4e-747a-48a7-a557-8e2ad7e27ca1" containerName="keystone-cron" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.597995 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.605921 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjd7f"] Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.650433 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef40f2cf-e0a7-4448-8409-fa73a2bea787-catalog-content\") pod \"certified-operators-tjd7f\" (UID: \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\") " pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.650587 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef40f2cf-e0a7-4448-8409-fa73a2bea787-utilities\") pod \"certified-operators-tjd7f\" (UID: \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\") " pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.651084 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4kkn\" (UniqueName: \"kubernetes.io/projected/ef40f2cf-e0a7-4448-8409-fa73a2bea787-kube-api-access-v4kkn\") pod \"certified-operators-tjd7f\" (UID: \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\") " pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.719954 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-gw4jk_0f42bf51-8a6c-4390-83a6-dbae6d26126a/prometheus-operator/0.log" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.755837 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4kkn\" (UniqueName: 
\"kubernetes.io/projected/ef40f2cf-e0a7-4448-8409-fa73a2bea787-kube-api-access-v4kkn\") pod \"certified-operators-tjd7f\" (UID: \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\") " pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.755924 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef40f2cf-e0a7-4448-8409-fa73a2bea787-catalog-content\") pod \"certified-operators-tjd7f\" (UID: \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\") " pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.755994 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef40f2cf-e0a7-4448-8409-fa73a2bea787-utilities\") pod \"certified-operators-tjd7f\" (UID: \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\") " pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.756657 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef40f2cf-e0a7-4448-8409-fa73a2bea787-catalog-content\") pod \"certified-operators-tjd7f\" (UID: \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\") " pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.756710 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef40f2cf-e0a7-4448-8409-fa73a2bea787-utilities\") pod \"certified-operators-tjd7f\" (UID: \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\") " pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.782286 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4kkn\" (UniqueName: 
\"kubernetes.io/projected/ef40f2cf-e0a7-4448-8409-fa73a2bea787-kube-api-access-v4kkn\") pod \"certified-operators-tjd7f\" (UID: \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\") " pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.930698 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:21 crc kubenswrapper[4756]: I1124 14:01:21.952983 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7954c5c5f5-j6z4j_d1a8e934-b419-4e57-9311-7c8a34745da9/prometheus-operator-admission-webhook/0.log" Nov 24 14:01:22 crc kubenswrapper[4756]: I1124 14:01:22.264783 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7954c5c5f5-vcww8_7b44bcfa-0c82-4db2-b4e0-310a76be2b6f/prometheus-operator-admission-webhook/0.log" Nov 24 14:01:22 crc kubenswrapper[4756]: I1124 14:01:22.425531 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-qmk8j_7ee15342-4efd-4e6b-8569-b54b26064eaf/operator/0.log" Nov 24 14:01:22 crc kubenswrapper[4756]: I1124 14:01:22.461709 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjd7f"] Nov 24 14:01:22 crc kubenswrapper[4756]: I1124 14:01:22.598980 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-p8ln7_0055f07d-1546-45ad-b576-87d016490055/perses-operator/0.log" Nov 24 14:01:22 crc kubenswrapper[4756]: I1124 14:01:22.740963 4756 generic.go:334] "Generic (PLEG): container finished" podID="ef40f2cf-e0a7-4448-8409-fa73a2bea787" containerID="1ade752fd1cbc71719778e1744dc3d658a93f2dae0da0039cf054757557bc173" exitCode=0 Nov 24 14:01:22 crc kubenswrapper[4756]: I1124 14:01:22.741028 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjd7f" event={"ID":"ef40f2cf-e0a7-4448-8409-fa73a2bea787","Type":"ContainerDied","Data":"1ade752fd1cbc71719778e1744dc3d658a93f2dae0da0039cf054757557bc173"} Nov 24 14:01:22 crc kubenswrapper[4756]: I1124 14:01:22.741079 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjd7f" event={"ID":"ef40f2cf-e0a7-4448-8409-fa73a2bea787","Type":"ContainerStarted","Data":"a63c2708f05dcecb64c018e057e31d9b1fce9e8da220abebe926a50061cf8158"} Nov 24 14:01:24 crc kubenswrapper[4756]: I1124 14:01:24.759594 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjd7f" event={"ID":"ef40f2cf-e0a7-4448-8409-fa73a2bea787","Type":"ContainerStarted","Data":"59a9a53f5c065d3d7f1b0465acb9c647e4442130f815b75b3f06506f6ee4840d"} Nov 24 14:01:26 crc kubenswrapper[4756]: I1124 14:01:26.778714 4756 generic.go:334] "Generic (PLEG): container finished" podID="ef40f2cf-e0a7-4448-8409-fa73a2bea787" containerID="59a9a53f5c065d3d7f1b0465acb9c647e4442130f815b75b3f06506f6ee4840d" exitCode=0 Nov 24 14:01:26 crc kubenswrapper[4756]: I1124 14:01:26.779247 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjd7f" event={"ID":"ef40f2cf-e0a7-4448-8409-fa73a2bea787","Type":"ContainerDied","Data":"59a9a53f5c065d3d7f1b0465acb9c647e4442130f815b75b3f06506f6ee4840d"} Nov 24 14:01:27 crc kubenswrapper[4756]: I1124 14:01:27.789892 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjd7f" event={"ID":"ef40f2cf-e0a7-4448-8409-fa73a2bea787","Type":"ContainerStarted","Data":"02fbef997602f026c959031bc7c9f1d4f0d76d331cd2cbc32081cccb813df81a"} Nov 24 14:01:31 crc kubenswrapper[4756]: I1124 14:01:31.931574 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:31 crc 
kubenswrapper[4756]: I1124 14:01:31.932227 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:32 crc kubenswrapper[4756]: I1124 14:01:32.987717 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tjd7f" podUID="ef40f2cf-e0a7-4448-8409-fa73a2bea787" containerName="registry-server" probeResult="failure" output=< Nov 24 14:01:32 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 14:01:32 crc kubenswrapper[4756]: > Nov 24 14:01:41 crc kubenswrapper[4756]: I1124 14:01:41.987112 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:42 crc kubenswrapper[4756]: I1124 14:01:42.008316 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tjd7f" podStartSLOduration=16.566415345 podStartE2EDuration="21.008296163s" podCreationTimestamp="2025-11-24 14:01:21 +0000 UTC" firstStartedPulling="2025-11-24 14:01:22.742794443 +0000 UTC m=+5615.100308585" lastFinishedPulling="2025-11-24 14:01:27.184675221 +0000 UTC m=+5619.542189403" observedRunningTime="2025-11-24 14:01:27.816088485 +0000 UTC m=+5620.173602627" watchObservedRunningTime="2025-11-24 14:01:42.008296163 +0000 UTC m=+5634.365810305" Nov 24 14:01:42 crc kubenswrapper[4756]: I1124 14:01:42.051609 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:42 crc kubenswrapper[4756]: I1124 14:01:42.228695 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tjd7f"] Nov 24 14:01:43 crc kubenswrapper[4756]: I1124 14:01:43.946866 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tjd7f" 
podUID="ef40f2cf-e0a7-4448-8409-fa73a2bea787" containerName="registry-server" containerID="cri-o://02fbef997602f026c959031bc7c9f1d4f0d76d331cd2cbc32081cccb813df81a" gracePeriod=2 Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.464641 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.611491 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef40f2cf-e0a7-4448-8409-fa73a2bea787-catalog-content\") pod \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\" (UID: \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\") " Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.611885 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4kkn\" (UniqueName: \"kubernetes.io/projected/ef40f2cf-e0a7-4448-8409-fa73a2bea787-kube-api-access-v4kkn\") pod \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\" (UID: \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\") " Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.612108 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef40f2cf-e0a7-4448-8409-fa73a2bea787-utilities\") pod \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\" (UID: \"ef40f2cf-e0a7-4448-8409-fa73a2bea787\") " Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.613302 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef40f2cf-e0a7-4448-8409-fa73a2bea787-utilities" (OuterVolumeSpecName: "utilities") pod "ef40f2cf-e0a7-4448-8409-fa73a2bea787" (UID: "ef40f2cf-e0a7-4448-8409-fa73a2bea787"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.617936 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef40f2cf-e0a7-4448-8409-fa73a2bea787-kube-api-access-v4kkn" (OuterVolumeSpecName: "kube-api-access-v4kkn") pod "ef40f2cf-e0a7-4448-8409-fa73a2bea787" (UID: "ef40f2cf-e0a7-4448-8409-fa73a2bea787"). InnerVolumeSpecName "kube-api-access-v4kkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.659786 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef40f2cf-e0a7-4448-8409-fa73a2bea787-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef40f2cf-e0a7-4448-8409-fa73a2bea787" (UID: "ef40f2cf-e0a7-4448-8409-fa73a2bea787"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.714114 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef40f2cf-e0a7-4448-8409-fa73a2bea787-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.714147 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4kkn\" (UniqueName: \"kubernetes.io/projected/ef40f2cf-e0a7-4448-8409-fa73a2bea787-kube-api-access-v4kkn\") on node \"crc\" DevicePath \"\"" Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.714181 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef40f2cf-e0a7-4448-8409-fa73a2bea787-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.958445 4756 generic.go:334] "Generic (PLEG): container finished" podID="ef40f2cf-e0a7-4448-8409-fa73a2bea787" 
containerID="02fbef997602f026c959031bc7c9f1d4f0d76d331cd2cbc32081cccb813df81a" exitCode=0 Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.958482 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjd7f" event={"ID":"ef40f2cf-e0a7-4448-8409-fa73a2bea787","Type":"ContainerDied","Data":"02fbef997602f026c959031bc7c9f1d4f0d76d331cd2cbc32081cccb813df81a"} Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.958505 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjd7f" event={"ID":"ef40f2cf-e0a7-4448-8409-fa73a2bea787","Type":"ContainerDied","Data":"a63c2708f05dcecb64c018e057e31d9b1fce9e8da220abebe926a50061cf8158"} Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.958524 4756 scope.go:117] "RemoveContainer" containerID="02fbef997602f026c959031bc7c9f1d4f0d76d331cd2cbc32081cccb813df81a" Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.958633 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tjd7f" Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.997489 4756 scope.go:117] "RemoveContainer" containerID="59a9a53f5c065d3d7f1b0465acb9c647e4442130f815b75b3f06506f6ee4840d" Nov 24 14:01:44 crc kubenswrapper[4756]: I1124 14:01:44.997710 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tjd7f"] Nov 24 14:01:45 crc kubenswrapper[4756]: I1124 14:01:45.006218 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tjd7f"] Nov 24 14:01:45 crc kubenswrapper[4756]: I1124 14:01:45.022835 4756 scope.go:117] "RemoveContainer" containerID="1ade752fd1cbc71719778e1744dc3d658a93f2dae0da0039cf054757557bc173" Nov 24 14:01:45 crc kubenswrapper[4756]: I1124 14:01:45.062040 4756 scope.go:117] "RemoveContainer" containerID="02fbef997602f026c959031bc7c9f1d4f0d76d331cd2cbc32081cccb813df81a" Nov 24 14:01:45 crc kubenswrapper[4756]: E1124 14:01:45.062523 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02fbef997602f026c959031bc7c9f1d4f0d76d331cd2cbc32081cccb813df81a\": container with ID starting with 02fbef997602f026c959031bc7c9f1d4f0d76d331cd2cbc32081cccb813df81a not found: ID does not exist" containerID="02fbef997602f026c959031bc7c9f1d4f0d76d331cd2cbc32081cccb813df81a" Nov 24 14:01:45 crc kubenswrapper[4756]: I1124 14:01:45.062579 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02fbef997602f026c959031bc7c9f1d4f0d76d331cd2cbc32081cccb813df81a"} err="failed to get container status \"02fbef997602f026c959031bc7c9f1d4f0d76d331cd2cbc32081cccb813df81a\": rpc error: code = NotFound desc = could not find container \"02fbef997602f026c959031bc7c9f1d4f0d76d331cd2cbc32081cccb813df81a\": container with ID starting with 02fbef997602f026c959031bc7c9f1d4f0d76d331cd2cbc32081cccb813df81a not 
found: ID does not exist" Nov 24 14:01:45 crc kubenswrapper[4756]: I1124 14:01:45.062683 4756 scope.go:117] "RemoveContainer" containerID="59a9a53f5c065d3d7f1b0465acb9c647e4442130f815b75b3f06506f6ee4840d" Nov 24 14:01:45 crc kubenswrapper[4756]: E1124 14:01:45.063124 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59a9a53f5c065d3d7f1b0465acb9c647e4442130f815b75b3f06506f6ee4840d\": container with ID starting with 59a9a53f5c065d3d7f1b0465acb9c647e4442130f815b75b3f06506f6ee4840d not found: ID does not exist" containerID="59a9a53f5c065d3d7f1b0465acb9c647e4442130f815b75b3f06506f6ee4840d" Nov 24 14:01:45 crc kubenswrapper[4756]: I1124 14:01:45.063219 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59a9a53f5c065d3d7f1b0465acb9c647e4442130f815b75b3f06506f6ee4840d"} err="failed to get container status \"59a9a53f5c065d3d7f1b0465acb9c647e4442130f815b75b3f06506f6ee4840d\": rpc error: code = NotFound desc = could not find container \"59a9a53f5c065d3d7f1b0465acb9c647e4442130f815b75b3f06506f6ee4840d\": container with ID starting with 59a9a53f5c065d3d7f1b0465acb9c647e4442130f815b75b3f06506f6ee4840d not found: ID does not exist" Nov 24 14:01:45 crc kubenswrapper[4756]: I1124 14:01:45.063251 4756 scope.go:117] "RemoveContainer" containerID="1ade752fd1cbc71719778e1744dc3d658a93f2dae0da0039cf054757557bc173" Nov 24 14:01:45 crc kubenswrapper[4756]: E1124 14:01:45.063528 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ade752fd1cbc71719778e1744dc3d658a93f2dae0da0039cf054757557bc173\": container with ID starting with 1ade752fd1cbc71719778e1744dc3d658a93f2dae0da0039cf054757557bc173 not found: ID does not exist" containerID="1ade752fd1cbc71719778e1744dc3d658a93f2dae0da0039cf054757557bc173" Nov 24 14:01:45 crc kubenswrapper[4756]: I1124 14:01:45.063562 4756 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ade752fd1cbc71719778e1744dc3d658a93f2dae0da0039cf054757557bc173"} err="failed to get container status \"1ade752fd1cbc71719778e1744dc3d658a93f2dae0da0039cf054757557bc173\": rpc error: code = NotFound desc = could not find container \"1ade752fd1cbc71719778e1744dc3d658a93f2dae0da0039cf054757557bc173\": container with ID starting with 1ade752fd1cbc71719778e1744dc3d658a93f2dae0da0039cf054757557bc173 not found: ID does not exist" Nov 24 14:01:46 crc kubenswrapper[4756]: I1124 14:01:46.488771 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef40f2cf-e0a7-4448-8409-fa73a2bea787" path="/var/lib/kubelet/pods/ef40f2cf-e0a7-4448-8409-fa73a2bea787/volumes" Nov 24 14:03:03 crc kubenswrapper[4756]: I1124 14:03:03.479306 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 14:03:03 crc kubenswrapper[4756]: I1124 14:03:03.480026 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 14:03:10 crc kubenswrapper[4756]: I1124 14:03:10.939699 4756 generic.go:334] "Generic (PLEG): container finished" podID="0f3b518a-c3b0-47d5-84cc-8db7970134cc" containerID="dd54f9e5746011226c38b433e1f9451131e79c02e02b2992f793d264976bb73a" exitCode=0 Nov 24 14:03:10 crc kubenswrapper[4756]: I1124 14:03:10.939839 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8fnz6/must-gather-tptz7" 
event={"ID":"0f3b518a-c3b0-47d5-84cc-8db7970134cc","Type":"ContainerDied","Data":"dd54f9e5746011226c38b433e1f9451131e79c02e02b2992f793d264976bb73a"} Nov 24 14:03:10 crc kubenswrapper[4756]: I1124 14:03:10.941080 4756 scope.go:117] "RemoveContainer" containerID="dd54f9e5746011226c38b433e1f9451131e79c02e02b2992f793d264976bb73a" Nov 24 14:03:11 crc kubenswrapper[4756]: I1124 14:03:11.613714 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8fnz6_must-gather-tptz7_0f3b518a-c3b0-47d5-84cc-8db7970134cc/gather/0.log" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.101488 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hmgsc"] Nov 24 14:03:18 crc kubenswrapper[4756]: E1124 14:03:18.103104 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef40f2cf-e0a7-4448-8409-fa73a2bea787" containerName="extract-utilities" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.103127 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef40f2cf-e0a7-4448-8409-fa73a2bea787" containerName="extract-utilities" Nov 24 14:03:18 crc kubenswrapper[4756]: E1124 14:03:18.103185 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef40f2cf-e0a7-4448-8409-fa73a2bea787" containerName="registry-server" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.103198 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef40f2cf-e0a7-4448-8409-fa73a2bea787" containerName="registry-server" Nov 24 14:03:18 crc kubenswrapper[4756]: E1124 14:03:18.103267 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef40f2cf-e0a7-4448-8409-fa73a2bea787" containerName="extract-content" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.103279 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef40f2cf-e0a7-4448-8409-fa73a2bea787" containerName="extract-content" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.103726 4756 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ef40f2cf-e0a7-4448-8409-fa73a2bea787" containerName="registry-server" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.106499 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.137220 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hmgsc"] Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.267429 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d74bb400-382c-4844-9ebc-8fbc21724247-catalog-content\") pod \"community-operators-hmgsc\" (UID: \"d74bb400-382c-4844-9ebc-8fbc21724247\") " pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.267588 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcmhw\" (UniqueName: \"kubernetes.io/projected/d74bb400-382c-4844-9ebc-8fbc21724247-kube-api-access-dcmhw\") pod \"community-operators-hmgsc\" (UID: \"d74bb400-382c-4844-9ebc-8fbc21724247\") " pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.267705 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d74bb400-382c-4844-9ebc-8fbc21724247-utilities\") pod \"community-operators-hmgsc\" (UID: \"d74bb400-382c-4844-9ebc-8fbc21724247\") " pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.369736 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcmhw\" (UniqueName: 
\"kubernetes.io/projected/d74bb400-382c-4844-9ebc-8fbc21724247-kube-api-access-dcmhw\") pod \"community-operators-hmgsc\" (UID: \"d74bb400-382c-4844-9ebc-8fbc21724247\") " pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.369806 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d74bb400-382c-4844-9ebc-8fbc21724247-utilities\") pod \"community-operators-hmgsc\" (UID: \"d74bb400-382c-4844-9ebc-8fbc21724247\") " pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.369936 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d74bb400-382c-4844-9ebc-8fbc21724247-catalog-content\") pod \"community-operators-hmgsc\" (UID: \"d74bb400-382c-4844-9ebc-8fbc21724247\") " pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.370592 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d74bb400-382c-4844-9ebc-8fbc21724247-utilities\") pod \"community-operators-hmgsc\" (UID: \"d74bb400-382c-4844-9ebc-8fbc21724247\") " pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.370871 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d74bb400-382c-4844-9ebc-8fbc21724247-catalog-content\") pod \"community-operators-hmgsc\" (UID: \"d74bb400-382c-4844-9ebc-8fbc21724247\") " pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.389944 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcmhw\" (UniqueName: 
\"kubernetes.io/projected/d74bb400-382c-4844-9ebc-8fbc21724247-kube-api-access-dcmhw\") pod \"community-operators-hmgsc\" (UID: \"d74bb400-382c-4844-9ebc-8fbc21724247\") " pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:18 crc kubenswrapper[4756]: I1124 14:03:18.441803 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:19 crc kubenswrapper[4756]: I1124 14:03:19.146663 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hmgsc"] Nov 24 14:03:20 crc kubenswrapper[4756]: I1124 14:03:20.050938 4756 generic.go:334] "Generic (PLEG): container finished" podID="d74bb400-382c-4844-9ebc-8fbc21724247" containerID="e12ddff7787a31a4b8e70f154c706385dc748521ebe0da752cdaaa526ffe383a" exitCode=0 Nov 24 14:03:20 crc kubenswrapper[4756]: I1124 14:03:20.051055 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmgsc" event={"ID":"d74bb400-382c-4844-9ebc-8fbc21724247","Type":"ContainerDied","Data":"e12ddff7787a31a4b8e70f154c706385dc748521ebe0da752cdaaa526ffe383a"} Nov 24 14:03:20 crc kubenswrapper[4756]: I1124 14:03:20.051512 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmgsc" event={"ID":"d74bb400-382c-4844-9ebc-8fbc21724247","Type":"ContainerStarted","Data":"af94c436364cb5ff0d33cb9f5cbbda88bfb721c11934a54873a625b42591de2f"} Nov 24 14:03:20 crc kubenswrapper[4756]: I1124 14:03:20.055376 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 14:03:21 crc kubenswrapper[4756]: I1124 14:03:21.075801 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmgsc" event={"ID":"d74bb400-382c-4844-9ebc-8fbc21724247","Type":"ContainerStarted","Data":"35eaeadf9a423fbafdc39624637d49c450fb0500184d2547aae153604faff923"} Nov 24 14:03:23 
crc kubenswrapper[4756]: I1124 14:03:23.100026 4756 generic.go:334] "Generic (PLEG): container finished" podID="d74bb400-382c-4844-9ebc-8fbc21724247" containerID="35eaeadf9a423fbafdc39624637d49c450fb0500184d2547aae153604faff923" exitCode=0 Nov 24 14:03:23 crc kubenswrapper[4756]: I1124 14:03:23.100107 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmgsc" event={"ID":"d74bb400-382c-4844-9ebc-8fbc21724247","Type":"ContainerDied","Data":"35eaeadf9a423fbafdc39624637d49c450fb0500184d2547aae153604faff923"} Nov 24 14:03:23 crc kubenswrapper[4756]: I1124 14:03:23.762315 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8fnz6/must-gather-tptz7"] Nov 24 14:03:23 crc kubenswrapper[4756]: I1124 14:03:23.762810 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8fnz6/must-gather-tptz7" podUID="0f3b518a-c3b0-47d5-84cc-8db7970134cc" containerName="copy" containerID="cri-o://9bf4eb765a6e218d342c2f146f1d33d9958275e04c24d24eb4a25dd1c585a33c" gracePeriod=2 Nov 24 14:03:23 crc kubenswrapper[4756]: I1124 14:03:23.780915 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8fnz6/must-gather-tptz7"] Nov 24 14:03:24 crc kubenswrapper[4756]: I1124 14:03:24.130459 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8fnz6_must-gather-tptz7_0f3b518a-c3b0-47d5-84cc-8db7970134cc/copy/0.log" Nov 24 14:03:24 crc kubenswrapper[4756]: I1124 14:03:24.131209 4756 generic.go:334] "Generic (PLEG): container finished" podID="0f3b518a-c3b0-47d5-84cc-8db7970134cc" containerID="9bf4eb765a6e218d342c2f146f1d33d9958275e04c24d24eb4a25dd1c585a33c" exitCode=143 Nov 24 14:03:24 crc kubenswrapper[4756]: I1124 14:03:24.149251 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmgsc" 
event={"ID":"d74bb400-382c-4844-9ebc-8fbc21724247","Type":"ContainerStarted","Data":"9a56681a7bde7975370b00c43aeda219a931c47ba1add4fb78eeaf8d2112fa23"} Nov 24 14:03:24 crc kubenswrapper[4756]: I1124 14:03:24.200409 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hmgsc" podStartSLOduration=2.65170259 podStartE2EDuration="6.200387628s" podCreationTimestamp="2025-11-24 14:03:18 +0000 UTC" firstStartedPulling="2025-11-24 14:03:20.055023348 +0000 UTC m=+5732.412537500" lastFinishedPulling="2025-11-24 14:03:23.603708396 +0000 UTC m=+5735.961222538" observedRunningTime="2025-11-24 14:03:24.198919039 +0000 UTC m=+5736.556433181" watchObservedRunningTime="2025-11-24 14:03:24.200387628 +0000 UTC m=+5736.557901770" Nov 24 14:03:24 crc kubenswrapper[4756]: I1124 14:03:24.287360 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8fnz6_must-gather-tptz7_0f3b518a-c3b0-47d5-84cc-8db7970134cc/copy/0.log" Nov 24 14:03:24 crc kubenswrapper[4756]: I1124 14:03:24.287750 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8fnz6/must-gather-tptz7" Nov 24 14:03:24 crc kubenswrapper[4756]: I1124 14:03:24.403316 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw8hx\" (UniqueName: \"kubernetes.io/projected/0f3b518a-c3b0-47d5-84cc-8db7970134cc-kube-api-access-mw8hx\") pod \"0f3b518a-c3b0-47d5-84cc-8db7970134cc\" (UID: \"0f3b518a-c3b0-47d5-84cc-8db7970134cc\") " Nov 24 14:03:24 crc kubenswrapper[4756]: I1124 14:03:24.403377 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f3b518a-c3b0-47d5-84cc-8db7970134cc-must-gather-output\") pod \"0f3b518a-c3b0-47d5-84cc-8db7970134cc\" (UID: \"0f3b518a-c3b0-47d5-84cc-8db7970134cc\") " Nov 24 14:03:24 crc kubenswrapper[4756]: I1124 14:03:24.409080 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f3b518a-c3b0-47d5-84cc-8db7970134cc-kube-api-access-mw8hx" (OuterVolumeSpecName: "kube-api-access-mw8hx") pod "0f3b518a-c3b0-47d5-84cc-8db7970134cc" (UID: "0f3b518a-c3b0-47d5-84cc-8db7970134cc"). InnerVolumeSpecName "kube-api-access-mw8hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 14:03:24 crc kubenswrapper[4756]: I1124 14:03:24.505432 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw8hx\" (UniqueName: \"kubernetes.io/projected/0f3b518a-c3b0-47d5-84cc-8db7970134cc-kube-api-access-mw8hx\") on node \"crc\" DevicePath \"\"" Nov 24 14:03:24 crc kubenswrapper[4756]: I1124 14:03:24.584289 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f3b518a-c3b0-47d5-84cc-8db7970134cc-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0f3b518a-c3b0-47d5-84cc-8db7970134cc" (UID: "0f3b518a-c3b0-47d5-84cc-8db7970134cc"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 14:03:24 crc kubenswrapper[4756]: I1124 14:03:24.607243 4756 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f3b518a-c3b0-47d5-84cc-8db7970134cc-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 14:03:25 crc kubenswrapper[4756]: I1124 14:03:25.158633 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8fnz6_must-gather-tptz7_0f3b518a-c3b0-47d5-84cc-8db7970134cc/copy/0.log" Nov 24 14:03:25 crc kubenswrapper[4756]: I1124 14:03:25.159103 4756 scope.go:117] "RemoveContainer" containerID="9bf4eb765a6e218d342c2f146f1d33d9958275e04c24d24eb4a25dd1c585a33c" Nov 24 14:03:25 crc kubenswrapper[4756]: I1124 14:03:25.159320 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8fnz6/must-gather-tptz7" Nov 24 14:03:25 crc kubenswrapper[4756]: I1124 14:03:25.183317 4756 scope.go:117] "RemoveContainer" containerID="dd54f9e5746011226c38b433e1f9451131e79c02e02b2992f793d264976bb73a" Nov 24 14:03:26 crc kubenswrapper[4756]: I1124 14:03:26.491886 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f3b518a-c3b0-47d5-84cc-8db7970134cc" path="/var/lib/kubelet/pods/0f3b518a-c3b0-47d5-84cc-8db7970134cc/volumes" Nov 24 14:03:28 crc kubenswrapper[4756]: I1124 14:03:28.442834 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:28 crc kubenswrapper[4756]: I1124 14:03:28.444369 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:28 crc kubenswrapper[4756]: I1124 14:03:28.491617 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:29 crc kubenswrapper[4756]: I1124 14:03:29.298242 4756 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:29 crc kubenswrapper[4756]: I1124 14:03:29.382690 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hmgsc"] Nov 24 14:03:31 crc kubenswrapper[4756]: I1124 14:03:31.248860 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hmgsc" podUID="d74bb400-382c-4844-9ebc-8fbc21724247" containerName="registry-server" containerID="cri-o://9a56681a7bde7975370b00c43aeda219a931c47ba1add4fb78eeaf8d2112fa23" gracePeriod=2 Nov 24 14:03:31 crc kubenswrapper[4756]: I1124 14:03:31.911560 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.064397 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d74bb400-382c-4844-9ebc-8fbc21724247-catalog-content\") pod \"d74bb400-382c-4844-9ebc-8fbc21724247\" (UID: \"d74bb400-382c-4844-9ebc-8fbc21724247\") " Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.064461 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d74bb400-382c-4844-9ebc-8fbc21724247-utilities\") pod \"d74bb400-382c-4844-9ebc-8fbc21724247\" (UID: \"d74bb400-382c-4844-9ebc-8fbc21724247\") " Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.065755 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d74bb400-382c-4844-9ebc-8fbc21724247-utilities" (OuterVolumeSpecName: "utilities") pod "d74bb400-382c-4844-9ebc-8fbc21724247" (UID: "d74bb400-382c-4844-9ebc-8fbc21724247"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.066123 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcmhw\" (UniqueName: \"kubernetes.io/projected/d74bb400-382c-4844-9ebc-8fbc21724247-kube-api-access-dcmhw\") pod \"d74bb400-382c-4844-9ebc-8fbc21724247\" (UID: \"d74bb400-382c-4844-9ebc-8fbc21724247\") " Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.067020 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d74bb400-382c-4844-9ebc-8fbc21724247-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.074759 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d74bb400-382c-4844-9ebc-8fbc21724247-kube-api-access-dcmhw" (OuterVolumeSpecName: "kube-api-access-dcmhw") pod "d74bb400-382c-4844-9ebc-8fbc21724247" (UID: "d74bb400-382c-4844-9ebc-8fbc21724247"). InnerVolumeSpecName "kube-api-access-dcmhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.125141 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d74bb400-382c-4844-9ebc-8fbc21724247-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d74bb400-382c-4844-9ebc-8fbc21724247" (UID: "d74bb400-382c-4844-9ebc-8fbc21724247"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.169228 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcmhw\" (UniqueName: \"kubernetes.io/projected/d74bb400-382c-4844-9ebc-8fbc21724247-kube-api-access-dcmhw\") on node \"crc\" DevicePath \"\"" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.169257 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d74bb400-382c-4844-9ebc-8fbc21724247-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.260259 4756 generic.go:334] "Generic (PLEG): container finished" podID="d74bb400-382c-4844-9ebc-8fbc21724247" containerID="9a56681a7bde7975370b00c43aeda219a931c47ba1add4fb78eeaf8d2112fa23" exitCode=0 Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.260479 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmgsc" event={"ID":"d74bb400-382c-4844-9ebc-8fbc21724247","Type":"ContainerDied","Data":"9a56681a7bde7975370b00c43aeda219a931c47ba1add4fb78eeaf8d2112fa23"} Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.260701 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmgsc" event={"ID":"d74bb400-382c-4844-9ebc-8fbc21724247","Type":"ContainerDied","Data":"af94c436364cb5ff0d33cb9f5cbbda88bfb721c11934a54873a625b42591de2f"} Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.260729 4756 scope.go:117] "RemoveContainer" containerID="9a56681a7bde7975370b00c43aeda219a931c47ba1add4fb78eeaf8d2112fa23" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.260560 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hmgsc" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.285365 4756 scope.go:117] "RemoveContainer" containerID="35eaeadf9a423fbafdc39624637d49c450fb0500184d2547aae153604faff923" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.313947 4756 scope.go:117] "RemoveContainer" containerID="e12ddff7787a31a4b8e70f154c706385dc748521ebe0da752cdaaa526ffe383a" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.314780 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hmgsc"] Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.326740 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hmgsc"] Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.386580 4756 scope.go:117] "RemoveContainer" containerID="9a56681a7bde7975370b00c43aeda219a931c47ba1add4fb78eeaf8d2112fa23" Nov 24 14:03:32 crc kubenswrapper[4756]: E1124 14:03:32.387259 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a56681a7bde7975370b00c43aeda219a931c47ba1add4fb78eeaf8d2112fa23\": container with ID starting with 9a56681a7bde7975370b00c43aeda219a931c47ba1add4fb78eeaf8d2112fa23 not found: ID does not exist" containerID="9a56681a7bde7975370b00c43aeda219a931c47ba1add4fb78eeaf8d2112fa23" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.387296 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a56681a7bde7975370b00c43aeda219a931c47ba1add4fb78eeaf8d2112fa23"} err="failed to get container status \"9a56681a7bde7975370b00c43aeda219a931c47ba1add4fb78eeaf8d2112fa23\": rpc error: code = NotFound desc = could not find container \"9a56681a7bde7975370b00c43aeda219a931c47ba1add4fb78eeaf8d2112fa23\": container with ID starting with 9a56681a7bde7975370b00c43aeda219a931c47ba1add4fb78eeaf8d2112fa23 not 
found: ID does not exist" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.387317 4756 scope.go:117] "RemoveContainer" containerID="35eaeadf9a423fbafdc39624637d49c450fb0500184d2547aae153604faff923" Nov 24 14:03:32 crc kubenswrapper[4756]: E1124 14:03:32.387691 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35eaeadf9a423fbafdc39624637d49c450fb0500184d2547aae153604faff923\": container with ID starting with 35eaeadf9a423fbafdc39624637d49c450fb0500184d2547aae153604faff923 not found: ID does not exist" containerID="35eaeadf9a423fbafdc39624637d49c450fb0500184d2547aae153604faff923" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.387718 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35eaeadf9a423fbafdc39624637d49c450fb0500184d2547aae153604faff923"} err="failed to get container status \"35eaeadf9a423fbafdc39624637d49c450fb0500184d2547aae153604faff923\": rpc error: code = NotFound desc = could not find container \"35eaeadf9a423fbafdc39624637d49c450fb0500184d2547aae153604faff923\": container with ID starting with 35eaeadf9a423fbafdc39624637d49c450fb0500184d2547aae153604faff923 not found: ID does not exist" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.387737 4756 scope.go:117] "RemoveContainer" containerID="e12ddff7787a31a4b8e70f154c706385dc748521ebe0da752cdaaa526ffe383a" Nov 24 14:03:32 crc kubenswrapper[4756]: E1124 14:03:32.388017 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12ddff7787a31a4b8e70f154c706385dc748521ebe0da752cdaaa526ffe383a\": container with ID starting with e12ddff7787a31a4b8e70f154c706385dc748521ebe0da752cdaaa526ffe383a not found: ID does not exist" containerID="e12ddff7787a31a4b8e70f154c706385dc748521ebe0da752cdaaa526ffe383a" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.388038 4756 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12ddff7787a31a4b8e70f154c706385dc748521ebe0da752cdaaa526ffe383a"} err="failed to get container status \"e12ddff7787a31a4b8e70f154c706385dc748521ebe0da752cdaaa526ffe383a\": rpc error: code = NotFound desc = could not find container \"e12ddff7787a31a4b8e70f154c706385dc748521ebe0da752cdaaa526ffe383a\": container with ID starting with e12ddff7787a31a4b8e70f154c706385dc748521ebe0da752cdaaa526ffe383a not found: ID does not exist" Nov 24 14:03:32 crc kubenswrapper[4756]: I1124 14:03:32.505934 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d74bb400-382c-4844-9ebc-8fbc21724247" path="/var/lib/kubelet/pods/d74bb400-382c-4844-9ebc-8fbc21724247/volumes" Nov 24 14:03:33 crc kubenswrapper[4756]: I1124 14:03:33.478980 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 14:03:33 crc kubenswrapper[4756]: I1124 14:03:33.479062 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 14:04:03 crc kubenswrapper[4756]: I1124 14:04:03.479451 4756 patch_prober.go:28] interesting pod/machine-config-daemon-8p8dh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 14:04:03 crc kubenswrapper[4756]: I1124 14:04:03.480017 4756 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 14:04:03 crc kubenswrapper[4756]: I1124 14:04:03.480072 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" Nov 24 14:04:03 crc kubenswrapper[4756]: I1124 14:04:03.480816 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0"} pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 14:04:03 crc kubenswrapper[4756]: I1124 14:04:03.480907 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" containerName="machine-config-daemon" containerID="cri-o://20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" gracePeriod=600 Nov 24 14:04:03 crc kubenswrapper[4756]: E1124 14:04:03.617933 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:04:03 crc kubenswrapper[4756]: I1124 14:04:03.661302 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" 
containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" exitCode=0 Nov 24 14:04:03 crc kubenswrapper[4756]: I1124 14:04:03.661362 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" event={"ID":"f0f50ecd-811f-4df2-ae0c-83a787d6cbec","Type":"ContainerDied","Data":"20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0"} Nov 24 14:04:03 crc kubenswrapper[4756]: I1124 14:04:03.661409 4756 scope.go:117] "RemoveContainer" containerID="1a32eeb64eb89d82e5f773623aaf3c9abc2217a542f84b0a5a5ac837f28a5018" Nov 24 14:04:03 crc kubenswrapper[4756]: I1124 14:04:03.662303 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:04:03 crc kubenswrapper[4756]: E1124 14:04:03.662716 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:04:15 crc kubenswrapper[4756]: I1124 14:04:15.475753 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:04:15 crc kubenswrapper[4756]: E1124 14:04:15.477038 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:04:21 crc kubenswrapper[4756]: I1124 
14:04:21.997264 4756 scope.go:117] "RemoveContainer" containerID="c6e408dc2d55898cb8e10429713c886b15b3537178f38fdccb4843d12809333e" Nov 24 14:04:26 crc kubenswrapper[4756]: I1124 14:04:26.475894 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:04:26 crc kubenswrapper[4756]: E1124 14:04:26.476824 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:04:41 crc kubenswrapper[4756]: I1124 14:04:41.476651 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:04:41 crc kubenswrapper[4756]: E1124 14:04:41.477994 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:04:52 crc kubenswrapper[4756]: I1124 14:04:52.475708 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:04:52 crc kubenswrapper[4756]: E1124 14:04:52.476566 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:05:06 crc kubenswrapper[4756]: I1124 14:05:06.476335 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:05:06 crc kubenswrapper[4756]: E1124 14:05:06.477271 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:05:19 crc kubenswrapper[4756]: I1124 14:05:19.476836 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:05:19 crc kubenswrapper[4756]: E1124 14:05:19.478295 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:05:22 crc kubenswrapper[4756]: I1124 14:05:22.105707 4756 scope.go:117] "RemoveContainer" containerID="e4f54fa401158c2da56a9365e243db2b74512538175c8756205ecb931c768b09" Nov 24 14:05:33 crc kubenswrapper[4756]: I1124 14:05:33.476627 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:05:33 crc kubenswrapper[4756]: E1124 14:05:33.477890 4756 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:05:48 crc kubenswrapper[4756]: I1124 14:05:48.484480 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:05:48 crc kubenswrapper[4756]: E1124 14:05:48.485479 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:06:01 crc kubenswrapper[4756]: I1124 14:06:01.476387 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:06:01 crc kubenswrapper[4756]: E1124 14:06:01.477243 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:06:12 crc kubenswrapper[4756]: I1124 14:06:12.476518 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:06:12 crc kubenswrapper[4756]: E1124 
14:06:12.477925 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:06:27 crc kubenswrapper[4756]: I1124 14:06:27.476258 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:06:27 crc kubenswrapper[4756]: E1124 14:06:27.477073 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.551688 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fbmh2"] Nov 24 14:06:31 crc kubenswrapper[4756]: E1124 14:06:31.552860 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d74bb400-382c-4844-9ebc-8fbc21724247" containerName="extract-content" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.552880 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="d74bb400-382c-4844-9ebc-8fbc21724247" containerName="extract-content" Nov 24 14:06:31 crc kubenswrapper[4756]: E1124 14:06:31.552901 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d74bb400-382c-4844-9ebc-8fbc21724247" containerName="registry-server" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.552911 4756 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d74bb400-382c-4844-9ebc-8fbc21724247" containerName="registry-server" Nov 24 14:06:31 crc kubenswrapper[4756]: E1124 14:06:31.552925 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d74bb400-382c-4844-9ebc-8fbc21724247" containerName="extract-utilities" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.552934 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="d74bb400-382c-4844-9ebc-8fbc21724247" containerName="extract-utilities" Nov 24 14:06:31 crc kubenswrapper[4756]: E1124 14:06:31.552996 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3b518a-c3b0-47d5-84cc-8db7970134cc" containerName="copy" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.553008 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3b518a-c3b0-47d5-84cc-8db7970134cc" containerName="copy" Nov 24 14:06:31 crc kubenswrapper[4756]: E1124 14:06:31.553037 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3b518a-c3b0-47d5-84cc-8db7970134cc" containerName="gather" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.553048 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3b518a-c3b0-47d5-84cc-8db7970134cc" containerName="gather" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.553359 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f3b518a-c3b0-47d5-84cc-8db7970134cc" containerName="gather" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.553380 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f3b518a-c3b0-47d5-84cc-8db7970134cc" containerName="copy" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.553412 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="d74bb400-382c-4844-9ebc-8fbc21724247" containerName="registry-server" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.555808 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.571754 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbmh2"] Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.658339 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b972f4d7-0507-4fc8-ab51-87acb563bdda-utilities\") pod \"redhat-operators-fbmh2\" (UID: \"b972f4d7-0507-4fc8-ab51-87acb563bdda\") " pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.658400 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b972f4d7-0507-4fc8-ab51-87acb563bdda-catalog-content\") pod \"redhat-operators-fbmh2\" (UID: \"b972f4d7-0507-4fc8-ab51-87acb563bdda\") " pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.658437 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9ddw\" (UniqueName: \"kubernetes.io/projected/b972f4d7-0507-4fc8-ab51-87acb563bdda-kube-api-access-p9ddw\") pod \"redhat-operators-fbmh2\" (UID: \"b972f4d7-0507-4fc8-ab51-87acb563bdda\") " pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.760001 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b972f4d7-0507-4fc8-ab51-87acb563bdda-utilities\") pod \"redhat-operators-fbmh2\" (UID: \"b972f4d7-0507-4fc8-ab51-87acb563bdda\") " pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.760053 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b972f4d7-0507-4fc8-ab51-87acb563bdda-catalog-content\") pod \"redhat-operators-fbmh2\" (UID: \"b972f4d7-0507-4fc8-ab51-87acb563bdda\") " pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.760078 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9ddw\" (UniqueName: \"kubernetes.io/projected/b972f4d7-0507-4fc8-ab51-87acb563bdda-kube-api-access-p9ddw\") pod \"redhat-operators-fbmh2\" (UID: \"b972f4d7-0507-4fc8-ab51-87acb563bdda\") " pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.760840 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b972f4d7-0507-4fc8-ab51-87acb563bdda-utilities\") pod \"redhat-operators-fbmh2\" (UID: \"b972f4d7-0507-4fc8-ab51-87acb563bdda\") " pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.760881 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b972f4d7-0507-4fc8-ab51-87acb563bdda-catalog-content\") pod \"redhat-operators-fbmh2\" (UID: \"b972f4d7-0507-4fc8-ab51-87acb563bdda\") " pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.779178 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9ddw\" (UniqueName: \"kubernetes.io/projected/b972f4d7-0507-4fc8-ab51-87acb563bdda-kube-api-access-p9ddw\") pod \"redhat-operators-fbmh2\" (UID: \"b972f4d7-0507-4fc8-ab51-87acb563bdda\") " pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:31 crc kubenswrapper[4756]: I1124 14:06:31.891628 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:32 crc kubenswrapper[4756]: I1124 14:06:32.408862 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbmh2"] Nov 24 14:06:32 crc kubenswrapper[4756]: I1124 14:06:32.432175 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmh2" event={"ID":"b972f4d7-0507-4fc8-ab51-87acb563bdda","Type":"ContainerStarted","Data":"93f069cc9bb1db7e4db444cecb39342f7ee67a8740b4e06f134fca6d83549884"} Nov 24 14:06:33 crc kubenswrapper[4756]: I1124 14:06:33.449805 4756 generic.go:334] "Generic (PLEG): container finished" podID="b972f4d7-0507-4fc8-ab51-87acb563bdda" containerID="3bfc29f98a2d1c660583e5d8e5f11b7eb8538aa4ef91b52614e36f3caecd862f" exitCode=0 Nov 24 14:06:33 crc kubenswrapper[4756]: I1124 14:06:33.449919 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmh2" event={"ID":"b972f4d7-0507-4fc8-ab51-87acb563bdda","Type":"ContainerDied","Data":"3bfc29f98a2d1c660583e5d8e5f11b7eb8538aa4ef91b52614e36f3caecd862f"} Nov 24 14:06:35 crc kubenswrapper[4756]: I1124 14:06:35.473367 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmh2" event={"ID":"b972f4d7-0507-4fc8-ab51-87acb563bdda","Type":"ContainerStarted","Data":"ac0a17ac941f8397eaf9a16a47b7b035c98ad3309ce963389cd0c9088d4e38fc"} Nov 24 14:06:36 crc kubenswrapper[4756]: I1124 14:06:36.486822 4756 generic.go:334] "Generic (PLEG): container finished" podID="b972f4d7-0507-4fc8-ab51-87acb563bdda" containerID="ac0a17ac941f8397eaf9a16a47b7b035c98ad3309ce963389cd0c9088d4e38fc" exitCode=0 Nov 24 14:06:36 crc kubenswrapper[4756]: I1124 14:06:36.486927 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmh2" 
event={"ID":"b972f4d7-0507-4fc8-ab51-87acb563bdda","Type":"ContainerDied","Data":"ac0a17ac941f8397eaf9a16a47b7b035c98ad3309ce963389cd0c9088d4e38fc"} Nov 24 14:06:37 crc kubenswrapper[4756]: I1124 14:06:37.505068 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmh2" event={"ID":"b972f4d7-0507-4fc8-ab51-87acb563bdda","Type":"ContainerStarted","Data":"5a33c0f98b7b1561599991bf09b51e222f4f01b15379df5287e03d62b8d44853"} Nov 24 14:06:37 crc kubenswrapper[4756]: I1124 14:06:37.529037 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fbmh2" podStartSLOduration=3.022693945 podStartE2EDuration="6.529015527s" podCreationTimestamp="2025-11-24 14:06:31 +0000 UTC" firstStartedPulling="2025-11-24 14:06:33.45345825 +0000 UTC m=+5925.810972432" lastFinishedPulling="2025-11-24 14:06:36.959779842 +0000 UTC m=+5929.317294014" observedRunningTime="2025-11-24 14:06:37.524586058 +0000 UTC m=+5929.882100230" watchObservedRunningTime="2025-11-24 14:06:37.529015527 +0000 UTC m=+5929.886529679" Nov 24 14:06:39 crc kubenswrapper[4756]: I1124 14:06:39.476964 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:06:39 crc kubenswrapper[4756]: E1124 14:06:39.487518 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:06:41 crc kubenswrapper[4756]: I1124 14:06:41.891865 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:41 crc 
kubenswrapper[4756]: I1124 14:06:41.892227 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:42 crc kubenswrapper[4756]: I1124 14:06:42.954055 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fbmh2" podUID="b972f4d7-0507-4fc8-ab51-87acb563bdda" containerName="registry-server" probeResult="failure" output=< Nov 24 14:06:42 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Nov 24 14:06:42 crc kubenswrapper[4756]: > Nov 24 14:06:51 crc kubenswrapper[4756]: I1124 14:06:51.951976 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:52 crc kubenswrapper[4756]: I1124 14:06:52.009468 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:52 crc kubenswrapper[4756]: I1124 14:06:52.192313 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbmh2"] Nov 24 14:06:53 crc kubenswrapper[4756]: I1124 14:06:53.674798 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fbmh2" podUID="b972f4d7-0507-4fc8-ab51-87acb563bdda" containerName="registry-server" containerID="cri-o://5a33c0f98b7b1561599991bf09b51e222f4f01b15379df5287e03d62b8d44853" gracePeriod=2 Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.178113 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.247018 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b972f4d7-0507-4fc8-ab51-87acb563bdda-utilities\") pod \"b972f4d7-0507-4fc8-ab51-87acb563bdda\" (UID: \"b972f4d7-0507-4fc8-ab51-87acb563bdda\") " Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.247101 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b972f4d7-0507-4fc8-ab51-87acb563bdda-catalog-content\") pod \"b972f4d7-0507-4fc8-ab51-87acb563bdda\" (UID: \"b972f4d7-0507-4fc8-ab51-87acb563bdda\") " Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.247135 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9ddw\" (UniqueName: \"kubernetes.io/projected/b972f4d7-0507-4fc8-ab51-87acb563bdda-kube-api-access-p9ddw\") pod \"b972f4d7-0507-4fc8-ab51-87acb563bdda\" (UID: \"b972f4d7-0507-4fc8-ab51-87acb563bdda\") " Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.248006 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b972f4d7-0507-4fc8-ab51-87acb563bdda-utilities" (OuterVolumeSpecName: "utilities") pod "b972f4d7-0507-4fc8-ab51-87acb563bdda" (UID: "b972f4d7-0507-4fc8-ab51-87acb563bdda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.256596 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b972f4d7-0507-4fc8-ab51-87acb563bdda-kube-api-access-p9ddw" (OuterVolumeSpecName: "kube-api-access-p9ddw") pod "b972f4d7-0507-4fc8-ab51-87acb563bdda" (UID: "b972f4d7-0507-4fc8-ab51-87acb563bdda"). InnerVolumeSpecName "kube-api-access-p9ddw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.349857 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b972f4d7-0507-4fc8-ab51-87acb563bdda-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.350269 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9ddw\" (UniqueName: \"kubernetes.io/projected/b972f4d7-0507-4fc8-ab51-87acb563bdda-kube-api-access-p9ddw\") on node \"crc\" DevicePath \"\"" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.350581 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b972f4d7-0507-4fc8-ab51-87acb563bdda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b972f4d7-0507-4fc8-ab51-87acb563bdda" (UID: "b972f4d7-0507-4fc8-ab51-87acb563bdda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.453344 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b972f4d7-0507-4fc8-ab51-87acb563bdda-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.476377 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:06:54 crc kubenswrapper[4756]: E1124 14:06:54.476796 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec" Nov 24 14:06:54 
crc kubenswrapper[4756]: I1124 14:06:54.686786 4756 generic.go:334] "Generic (PLEG): container finished" podID="b972f4d7-0507-4fc8-ab51-87acb563bdda" containerID="5a33c0f98b7b1561599991bf09b51e222f4f01b15379df5287e03d62b8d44853" exitCode=0 Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.686873 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmh2" event={"ID":"b972f4d7-0507-4fc8-ab51-87acb563bdda","Type":"ContainerDied","Data":"5a33c0f98b7b1561599991bf09b51e222f4f01b15379df5287e03d62b8d44853"} Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.686917 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbmh2" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.687734 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbmh2" event={"ID":"b972f4d7-0507-4fc8-ab51-87acb563bdda","Type":"ContainerDied","Data":"93f069cc9bb1db7e4db444cecb39342f7ee67a8740b4e06f134fca6d83549884"} Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.687754 4756 scope.go:117] "RemoveContainer" containerID="5a33c0f98b7b1561599991bf09b51e222f4f01b15379df5287e03d62b8d44853" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.723214 4756 scope.go:117] "RemoveContainer" containerID="ac0a17ac941f8397eaf9a16a47b7b035c98ad3309ce963389cd0c9088d4e38fc" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.724354 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbmh2"] Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.734204 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fbmh2"] Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.760464 4756 scope.go:117] "RemoveContainer" containerID="3bfc29f98a2d1c660583e5d8e5f11b7eb8538aa4ef91b52614e36f3caecd862f" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 
14:06:54.822014 4756 scope.go:117] "RemoveContainer" containerID="5a33c0f98b7b1561599991bf09b51e222f4f01b15379df5287e03d62b8d44853" Nov 24 14:06:54 crc kubenswrapper[4756]: E1124 14:06:54.822614 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a33c0f98b7b1561599991bf09b51e222f4f01b15379df5287e03d62b8d44853\": container with ID starting with 5a33c0f98b7b1561599991bf09b51e222f4f01b15379df5287e03d62b8d44853 not found: ID does not exist" containerID="5a33c0f98b7b1561599991bf09b51e222f4f01b15379df5287e03d62b8d44853" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.822653 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a33c0f98b7b1561599991bf09b51e222f4f01b15379df5287e03d62b8d44853"} err="failed to get container status \"5a33c0f98b7b1561599991bf09b51e222f4f01b15379df5287e03d62b8d44853\": rpc error: code = NotFound desc = could not find container \"5a33c0f98b7b1561599991bf09b51e222f4f01b15379df5287e03d62b8d44853\": container with ID starting with 5a33c0f98b7b1561599991bf09b51e222f4f01b15379df5287e03d62b8d44853 not found: ID does not exist" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.822678 4756 scope.go:117] "RemoveContainer" containerID="ac0a17ac941f8397eaf9a16a47b7b035c98ad3309ce963389cd0c9088d4e38fc" Nov 24 14:06:54 crc kubenswrapper[4756]: E1124 14:06:54.823210 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac0a17ac941f8397eaf9a16a47b7b035c98ad3309ce963389cd0c9088d4e38fc\": container with ID starting with ac0a17ac941f8397eaf9a16a47b7b035c98ad3309ce963389cd0c9088d4e38fc not found: ID does not exist" containerID="ac0a17ac941f8397eaf9a16a47b7b035c98ad3309ce963389cd0c9088d4e38fc" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.823272 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ac0a17ac941f8397eaf9a16a47b7b035c98ad3309ce963389cd0c9088d4e38fc"} err="failed to get container status \"ac0a17ac941f8397eaf9a16a47b7b035c98ad3309ce963389cd0c9088d4e38fc\": rpc error: code = NotFound desc = could not find container \"ac0a17ac941f8397eaf9a16a47b7b035c98ad3309ce963389cd0c9088d4e38fc\": container with ID starting with ac0a17ac941f8397eaf9a16a47b7b035c98ad3309ce963389cd0c9088d4e38fc not found: ID does not exist" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.823297 4756 scope.go:117] "RemoveContainer" containerID="3bfc29f98a2d1c660583e5d8e5f11b7eb8538aa4ef91b52614e36f3caecd862f" Nov 24 14:06:54 crc kubenswrapper[4756]: E1124 14:06:54.823958 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bfc29f98a2d1c660583e5d8e5f11b7eb8538aa4ef91b52614e36f3caecd862f\": container with ID starting with 3bfc29f98a2d1c660583e5d8e5f11b7eb8538aa4ef91b52614e36f3caecd862f not found: ID does not exist" containerID="3bfc29f98a2d1c660583e5d8e5f11b7eb8538aa4ef91b52614e36f3caecd862f" Nov 24 14:06:54 crc kubenswrapper[4756]: I1124 14:06:54.824084 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bfc29f98a2d1c660583e5d8e5f11b7eb8538aa4ef91b52614e36f3caecd862f"} err="failed to get container status \"3bfc29f98a2d1c660583e5d8e5f11b7eb8538aa4ef91b52614e36f3caecd862f\": rpc error: code = NotFound desc = could not find container \"3bfc29f98a2d1c660583e5d8e5f11b7eb8538aa4ef91b52614e36f3caecd862f\": container with ID starting with 3bfc29f98a2d1c660583e5d8e5f11b7eb8538aa4ef91b52614e36f3caecd862f not found: ID does not exist" Nov 24 14:06:56 crc kubenswrapper[4756]: I1124 14:06:56.487799 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b972f4d7-0507-4fc8-ab51-87acb563bdda" path="/var/lib/kubelet/pods/b972f4d7-0507-4fc8-ab51-87acb563bdda/volumes" Nov 24 14:07:09 crc kubenswrapper[4756]: I1124 
14:07:09.475433 4756 scope.go:117] "RemoveContainer" containerID="20c4bb1316f6a8321ed3b128537fea40630b78cbf4d55469a52da80d60d289e0" Nov 24 14:07:09 crc kubenswrapper[4756]: E1124 14:07:09.476076 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8p8dh_openshift-machine-config-operator(f0f50ecd-811f-4df2-ae0c-83a787d6cbec)\"" pod="openshift-machine-config-operator/machine-config-daemon-8p8dh" podUID="f0f50ecd-811f-4df2-ae0c-83a787d6cbec"